Programming for Embedded System vs Device Drivers [closed] - c

What is the difference between programming for embedded systems vs device drivers? Both areas deal with making the hardware do a specific task. I would appreciate an explanation. I have knowledge of C and I would like to go a bit deeper into working with the hardware.

What is the difference between programming for embedded systems vs device drivers?
Writing a Device Driver means a very specific thing: writing low-level code that runs at elevated privilege in the kernel. It's quite tricky, but if your hardware is similar enough to existing hardware, you can sometimes "get by" by copying an existing driver and making a few changes. Writing a driver from scratch involves knowing a lot about the kernel. Device drivers are written almost exclusively in C.
Writing for an "Embedded system" isn't very specific. Generally, it means "programming on a computer with fewer resources than a desktop PC, and maybe special hardware". There is no real line between "embedded computer" and "general purpose computer".
Everyone would agree that an 8-bit system with 128 bytes of RAM is "embedded programming" (Arduino). But the Raspberry Pi (with GBs of RAM, hard drives, an HDMI display) can be considered embedded or not depending on your view. If you unplug the monitor and put it on a robot, more people would say it requires embedded programming. People sometimes call programming apps for phones "embedded programming", but generally they call it "mobile" instead.
Embedded systems can be programmed in high level languages like Ruby/Python, or even shell scripts.
What are some purposes of programming device drivers?
Well, any time you have a hardware device. These days, we have FUSE and libusb, which blur the line. But if you want your wifi/webcam/usb port to be recognized by the OS, it needs a driver.
What can't you do, programming-wise, for embedded systems that you can with device drivers, and vice versa?
As I said, embedded systems sometimes contain bash scripts (e.g. my home router).
I'm confused because they both deal with programming for hardware specifically on a low level.
There is some overlap, but they are quite distinct.
Embedded is an adjective that describes the whole system, while 'driver' refers to one specific tiny part of the system. You can do driver programming without doing embedded (e.g. writing a driver for a webcam on your desktop), and you can do embedded programming without writing new kernel drivers (e.g. there's no need to write drivers if all your hardware is already supported by the kernel).
If I wanted to create a robot, would this be under embedded systems or device drivers?
On-board robotic systems are usually embedded programming. It gets fuzzy if you strap a laptop to your robot -- people might say that's not embedded anymore, since it's a desktop OS. (Embedded systems rarely have a GUI, and if they do, it's rarely a mainstream one.)
Your robot may or may not require writing new drivers. Maybe the motor can be turned on from user space, so you don't need a driver. On the other hand, there are times where you need the extra features found only in the kernel: Faster response times, access control, etc. For example, if your program dies, it might leave the motor running, and that's bad. So you can write a kernel driver that will clean up for your program when the program exits. It's a little bit more work up front, but can make development simpler down the road.
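To make that concrete, here is a minimal sketch of such a "clean up on exit" driver, written as a Linux misc character device. Everything hardware-specific here (the register address, the meaning of writing 0) is invented for illustration; a real board's documentation would supply those details.

    /* Hypothetical sketch: a character device whose release() handler stops a
     * motor when the controlling process exits (cleanly or by crashing). */
    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/miscdevice.h>
    #include <linux/io.h>

    #define MOTOR_CTRL_PHYS 0x40001000   /* made-up physical address of the control register */

    static void __iomem *motor_ctrl_reg;

    static int motor_release(struct inode *inode, struct file *file)
    {
        /* Runs when the last descriptor is closed -- including when the
         * user-space program dies -- so the motor is always switched off. */
        iowrite32(0, motor_ctrl_reg);    /* 0 = "motor off" on this imaginary board */
        return 0;
    }

    static const struct file_operations motor_fops = {
        .owner   = THIS_MODULE,
        .release = motor_release,
        /* .write / .unlocked_ioctl would expose "set speed", "read sensor", ... */
    };

    static struct miscdevice motor_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "robot_motor",          /* appears as /dev/robot_motor */
        .fops  = &motor_fops,
    };

    static int __init motor_init(void)
    {
        motor_ctrl_reg = ioremap(MOTOR_CTRL_PHYS, 4);
        if (!motor_ctrl_reg)
            return -ENOMEM;
        return misc_register(&motor_dev);
    }

    static void __exit motor_exit(void)
    {
        misc_deregister(&motor_dev);
        iounmap(motor_ctrl_reg);
    }

    module_init(motor_init);
    module_exit(motor_exit);
    MODULE_LICENSE("GPL");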
What about making the GPU of a PC work for that OS? Would that be device drivers? If the hardware is standalone, without an OS, then is it embedded?
Yes. Writing a GPU driver is writing kernel device driver code. (it's fuzzy these days because of libraries, but whatever.) If you wrote it on embedded hardware, you can call it both device driver and embedded programming.

The way you have posed the question, the answer is there is no difference. You have asked: what is the difference between an apple and an apple? None.
Now, if you want to, say, compare bare metal and Linux device drivers: Linux device drivers involve a lot of operating system API calls you have to make, and you have to conform to that sandbox, so there is a lot of work there on top of the poking and peeking of registers and memory of the various peripherals. If you go bare metal (no operating system), then you can do pretty much anything you want; you can create more work for yourself than a (Linux) device driver would take, or you could create less.
You can go to the depth of a device driver, or all the way to bare metal it is your choice. As far as the peripheral is concerned the stuff you have to do to it or with it will be similar, the differences will have to do with dealing with the operating system vs dealing without an operating system.
Maybe you should pick a task and do that. Something like "send a byte out a serial port" is a reasonable goal. Putting a pixel on a display (the Raspberry Pi is an exception), anything graphics, anything USB, is not a reasonable first goal; there is a considerable amount of overhead, knowledge and experience you would need before doing that. Blinking an LED (basic GPIO), reading a button, and UART TX and RX are generally where you get your feet wet with bare metal. Granted, tty/UART stuff on Linux is far from beginner stuff, so you really just have to start trying things, fail, get up and try something else, and see where that takes you. Fortunately there are tons of simulators out there, so you can do all of these things using free everything: simulators, toolchains, etc.
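For a flavour of what "blinking an LED (basic GPIO)" looks like on bare metal, here is a hedged sketch. The register addresses and the pin number are made up; the real values come from the chosen microcontroller's datasheet.

    /* Bare-metal LED blink sketch. Register addresses and bit layout are
     * invented for illustration; a real part's datasheet gives the actual ones. */
    #include <stdint.h>

    #define GPIO_DIR   (*(volatile uint32_t *)0x40020000u)  /* direction register (made up) */
    #define GPIO_OUT   (*(volatile uint32_t *)0x40020004u)  /* output data register (made up) */
    #define LED_PIN    (1u << 5)                            /* assume the LED sits on pin 5 */

    static void delay(volatile uint32_t n)
    {
        while (n--) { /* crude busy-wait; a timer peripheral is the real answer */ }
    }

    int main(void)
    {
        GPIO_DIR |= LED_PIN;            /* configure the pin as an output */
        for (;;) {
            GPIO_OUT ^= LED_PIN;        /* toggle the LED */
            delay(100000);
        }
    }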

Related

How does C programming allow hardware-level control?

I read somewhere that learning C programming gives us an actual idea of what is happening at the hardware level, i.e. C programming teaches us "real" programming: how memory is being utilised and how hardware resources are used, and it lets us work with hardware-level stuff, as if we are the ones who can use and control these resources in our own way, whereas other high-level languages don't allow this.
Now I am learning C programming, but I am not able to understand how I am controlling my hardware resources.
I have no idea how it allows me to use my computer's resources independently.
In user mode, on a 32- or 64-bit multitasking operating system, even C won't show you a tiny bit of hardware: the lowest level you'll see is the operating system itself.
You may ask the OS to draw a window, to save a file, or to send data through a network, but you won't touch the GPU, disk controller or Ethernet MAC/PHY chip directly to do that. In fact, you probably won't even be able to tell which KIND of hardware is behind it... Is it an Nvidia card? An old SVGA one? A mechanical hard drive, or an NVMe drive? A 10BaseT NIC, or a 10 Gb/s optical fiber network card? You can't tell just with C. Only the OS knows, and it's the OS that may tell you. You'll get that information in C exactly like you would have got it with, let's say, Python.
To see hardware and how it works, you'll need to be able to touch hardware with software instructions. On a modern OS, it means being in kernel mode. Or to use an old-timer OS, like MS-DOS, or even no OS at all - called "bare metal development", often encountered with microcontrollers like Arduino and similar devices.
In this world, you'll need to learn what a register is, how GPIO works, and how you address a UART, and if you use specific controllers, you'll have to read (and understand!) their datasheets if you want to make them work.
Indeed, it's often easier to do such low-level code in C rather than in assembler - especially since each CPU has its own assembly language, so that may in the end become a lot of languages to master. But it's not mandatory. It can be done with any language, as long as you can produce an absolute (= all addresses resolved), standalone (= no dependencies) and ROMable binary that can be written into the Flash/EEPROM of your microcontroller. It can be done in assembler, C, C++ or Ada for the most common ones, and virtually any language that doesn't need a (too) big runtime library.
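As a hedged illustration of "addressing a UART" from C, here is a polled-transmit sketch over memory-mapped registers. The addresses and the status bit are placeholders for whatever the datasheet of the actual UART specifies.

    #include <stdint.h>

    /* Placeholder addresses/bits; the real values come from the UART's datasheet. */
    #define UART_STATUS   (*(volatile uint32_t *)0x4000C000u)
    #define UART_TXDATA   (*(volatile uint32_t *)0x4000C004u)
    #define TX_READY      (1u << 0)     /* "transmit holding register empty" flag */

    void uart_putc(char c)
    {
        while (!(UART_STATUS & TX_READY)) {
            /* spin until the transmitter can accept another byte */
        }
        UART_TXDATA = (uint32_t)(uint8_t)c;
    }

    void uart_puts(const char *s)
    {
        while (*s)
            uart_putc(*s++);
    }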

What's the point of a Linux character device driver if you can just use outb/inb from userspace? [closed]

I'm having a hard time understanding when I should write a device driver instead of just sending opcodes directly to the hardware via outb from my userspace programs. I initially figured that I should create simple routines for the hardware, but now I'm starting to think that algorithms should stay in userspace.
Suppose I'm programming a hypothetical robotic arm. I could write several functions in a Linux kernel module that would automate the hardware outputs needed for common tasks (e.g. move the arm to the HOME position, pick up a new block from a known location at the start of the assembly line, etc.). However, after reading more about device drivers, it seems that the rule of thumb is to keep the device driver as close to hardware-specific code as possible, leaving the "heavy lifting" algorithms to userspace.
This confuses me, since if the only functions implemented by the device drivers are simple opcode calls, what's the reason for a userspace program to use the device file instead of calling outb/inb directly?
I suppose what I'm trying to figure out is: how do I decide what functionality goes in kernelspace instead of userspace?
Good question. I've wrestled with that - I've even written drivers to control robotic arms when I knew for a fact it was not necessary. I could just as easily send commands through a serial port, or with outb(), etc. I only wrote those drivers for educational purposes.
There are a lot of good reasons for a device driver. Imagine trying to control your network card directly from userspace! First of all, a driver gives you a nice abstraction at the OS level (eth0, etc). But trying to handle interrupts for packet send/receive in userspace would be wildly impractical from a performance standpoint - maybe even impossible. Just the time it takes to respond to an interrupt in userspace would drag the interface to its knees.
Imagine further that you bought a new network card. Wouldn't it be nice to just load the new driver and continue talking to eth0 from userspace with no changes to your code?
So, I would say "there is no point" in writing a driver if you don't see the need. I think the existence of drivers is driven by the need (as in the NIC driver example), not the other way around.
It sounds like for your app, outb() is going to be much more straightforward than creating a driver. In the end, I didn't even use my robotic arm drivers - just writing bytes to the serial port worked just as well - and only required a few lines of code ;-)
If you use outb and inb in userspace, then your userspace code will be x86-specific - the userspace outb() and inb() macros are implemented with x86 assembly. On the other hand, if you write a kernel driver, then your driver will work on any PCI-supporting architecture - the inb() and outb() functions in the kernel are implemented in an architecture-specific manner. The kernel also gives you functions like request_region() to ensure that your I/O ports don't clash with any other driver.
Furthermore, your userspace driver would need to run as root (or technically with the CAP_SYS_RAWIO capability, which is root-equivalent). A character device driver in the kernel would mean that you can use the UNIX permissions on the character device file to control which userspace user can access the device.
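For contrast, this is roughly what the user-space route looks like on x86 Linux: raw port I/O through ioperm()/outb(), which requires root (or CAP_SYS_RAWIO) and is x86-only. The port number and the "opcode" are invented for the hypothetical arm.

    /* User-space port I/O on x86 Linux: needs root (or CAP_SYS_RAWIO) and is
     * x86-only, unlike going through a kernel character device. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>      /* ioperm(), outb() - x86-specific */

    #define ARM_PORT 0x378   /* made-up I/O port for the hypothetical robotic arm */

    int main(void)
    {
        if (ioperm(ARM_PORT, 1, 1) != 0) {       /* grant access to one port */
            perror("ioperm (are you root?)");
            return EXIT_FAILURE;
        }
        outb(0x01, ARM_PORT);                    /* hypothetical "move to HOME" opcode */
        ioperm(ARM_PORT, 1, 0);                  /* drop the access again */
        return EXIT_SUCCESS;
    }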
A device driver must implement only the mechanism needed to handle the hardware (whatever the operating system).
All the intelligence of a solution must live in user-space.
Yes, you can do everything in user-space but:
it is not re-usable; other user-space programs must re-implement the mechanism to gain access to the robotic arm (for example)
bad performance; it depends on the application - maybe it is not a problem for a robotic arm (which is slow), but it can be a problem for a network card, a disk, or a graphics card
So, for a robotic arm, you should implement the mechanism in the driver (move a motor, get information from sensors). Then your program and other programs can use the driver to do something clever with the arm. The clever stuff is done by the user-space program: paint the Gioconda, prepare a cake, move dynamite carefully. The driver is the implementation of the basic functions that allow its users to use the hardware.
But obviously, it depends on the hardware and context.
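A sketch of that mechanism/policy split from the user-space side, assuming an invented /dev node and made-up ioctl commands that a robotic-arm driver might expose:

    /* Hypothetical user-space program: the "clever" policy lives here, while the
     * driver only exposes the mechanism (move a motor, read a sensor) through
     * made-up ioctl commands on an invented /dev node. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ioctl.h>

    #define ARM_IOC_MAGIC      'a'
    #define ARM_IOC_MOVE_MOTOR _IOW(ARM_IOC_MAGIC, 1, int)   /* argument: target angle */
    #define ARM_IOC_READ_FORCE _IOR(ARM_IOC_MAGIC, 2, int)   /* argument: force reading */

    int main(void)
    {
        int fd = open("/dev/robot_arm", O_RDWR);             /* invented device node */
        if (fd < 0) { perror("open"); return 1; }

        int angle = 90, force = 0;
        ioctl(fd, ARM_IOC_MOVE_MOTOR, &angle);                /* mechanism: move */
        ioctl(fd, ARM_IOC_READ_FORCE, &force);                /* mechanism: sense */

        /* Policy (deciding what to do with the reading) stays in user space. */
        printf("force = %d\n", force);
        close(fd);
        return 0;
    }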

How do Video Game Emulators Work? [closed]

I am curious as to how emulators work. What are they written in? Do they have to emulate even the graphics? How do people get the games uploaded as ROMs? Do they simulate the system's OS?
There are several emulation techniques. The first technique is called low-level emulation. The emulator in this case can be written in practically any language; however, because of the large amount of binary data manipulation, C and C++ lend themselves well to the task, though plenty of other languages are capable of it.
With low-level emulation, the program simulates the exact hardware of the original system. For example, the original NES has well-defined hardware, known both from official documentation and from reverse engineering. We know exactly how its 6502-based CPU behaves, along with the graphics, sound chips, etc. With low-level emulation, the exact binary data of the original game is interpreted in software in exactly the same way that the original hardware interprets it. This includes the original machine code written for the 6502 instruction set, the graphics data, the I/O, everything. The graphics and sound hardware are emulated by translating operations intended for the original hardware into calls to modern graphics and sound APIs.
This technique is the most accurate and successful but is also the slowest and sometimes the most difficult to implement for complex machines.
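The core of a low-level emulator is just a fetch-decode-execute loop. Below is a toy sketch handling two real 6502 opcodes (LDA immediate and STA absolute); flags, addressing modes, cycle timing and the memory-mapped I/O of a real console are deliberately left out.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy CPU state: a tiny subset of a 6502-like machine, for illustration only. */
    struct cpu {
        uint16_t pc;            /* program counter */
        uint8_t  a;             /* accumulator */
        uint8_t  mem[65536];    /* flat 64 KiB address space */
    };

    void step(struct cpu *c)
    {
        uint8_t opcode = c->mem[c->pc++];            /* fetch */
        switch (opcode) {                            /* decode + execute */
        case 0xA9:                                   /* LDA #imm: load accumulator */
            c->a = c->mem[c->pc++];
            break;
        case 0x8D: {                                 /* STA abs: store accumulator */
            uint16_t addr = c->mem[c->pc] | (c->mem[c->pc + 1] << 8);
            c->pc += 2;
            c->mem[addr] = c->a;                     /* a real emulator would route writes
                                                        to PPU/APU registers here */
            break;
        }
        default:
            printf("unimplemented opcode 0x%02X\n", opcode);
            break;
        }
    }

    int main(void)
    {
        static struct cpu c;                         /* pc = 0, memory zeroed */
        /* tiny test program: LDA #$42 ; STA $0200 */
        uint8_t prog[] = { 0xA9, 0x42, 0x8D, 0x00, 0x02 };
        for (size_t i = 0; i < sizeof prog; i++)
            c.mem[i] = prog[i];
        step(&c);                                    /* executes LDA #$42 */
        step(&c);                                    /* executes STA $0200 */
        printf("mem[0x0200] = 0x%02X\n", c.mem[0x0200]);
        return 0;
    }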
The second method is called static recompilation. The original machine code for the original system is analyzed and then recompiled for a modern computer. This technique produces the fastest emulation but has a really low rate of success. Emulators employing this technique could, at best, only support a few demos and games. The reason is that the runtime environment the original software expects often changes in ways that are hard or impossible to know at compile time.
The final technique is called dynamic recompilation. In this technique the emulator analyzes the code and recompiles it as it is running. This allows the compiler to tailor the runtime environment to what the original software expects based on information available as the program is running.
Involved in most forms of recompilation techniques is something called high-level emulation. This is based on the observation that most code is simply code compiled to call operating system or C library routines. The code is recompiled for the host machine, and the calls to the original operating system and libraries, such as those for graphics and sound, are reimplemented natively instead of being emulated. For example, if there is a call to draw a triangle on the screen, the emulator can simply perform the operation directly without having to emulate the exact low-level implementation of communicating the draw command to the original graphics hardware. This is how almost all Nintendo 64 and PlayStation emulators work.
The original operating systems only sometimes need to be re-implemented. For example, the Nintendo 64 didn't actually have an operating system; each cartridge was its own OS, per se. The emulator, however, recognizes common routines that all ROMs implement and dynamically captures and reimplements them. The PlayStation, however, had a proprietary BIOS used for setting up the basic hardware and reading the game from the CD. Emulators have to have a copy of this BIOS or attempt to reimplement its functionality.
We know that emulators using dynamic recompilation have been implemented inside, for example, the Xbox 360 in order to play original Xbox games. Such a task would be very difficult for outside developers, but simpler for Microsoft who has all of the original and proprietary documentation and the manpower to create and optimize such an emulator. In this case, the entire original Xbox operating system does not need to be emulated, however the calls that the original games make to the original operating system have to be translated into the native operating system. The technique for the Xbox One to emulate the Xbox 360 is similar, except in order to have a greater degree of compatibility with Xbox 360 titles in emulation they chose to run the original Xbox 360 operating system in their emulator.
Games from game cartridges are moved onto a computer through hardware which is specially designed for ROM dumping. ROMs on the older machines actually behave in a really simple manner: they have address input lines and data output lines. A device can be constructed using a microcontroller to dump these ROMs and then transfer them to a computer using serial, USB or some other method. Some ROMs can even be read through a computer's programmable parallel port, largely missing from modern PCs, though USB adapters for them exist.
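A dumper's firmware loop is conceptually this simple. The helper functions below are hypothetical stand-ins for whatever pin and serial routines the chosen microcontroller provides:

    #include <stdint.h>

    /* Hypothetical helpers: on a real dumper these would drive the port pins
     * wired to the cartridge and push bytes out over UART/USB. */
    void    set_address_bus(uint32_t addr);
    uint8_t read_data_bus(void);
    void    serial_send_byte(uint8_t b);

    void dump_rom(uint32_t rom_size)
    {
        for (uint32_t addr = 0; addr < rom_size; addr++) {
            set_address_bus(addr);            /* drive the ROM's address input lines */
            uint8_t byte = read_data_bus();   /* latch the ROM's data output lines */
            serial_send_byte(byte);           /* ship the byte to the PC */
        }
    }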
Because of the massive amounts of dynamic code generation, emulators that use recompilation techniques almost exclusively use C or C++, however any language capable of systems programming and low level code interfacing at run-time is capable of doing this.

Cross-platform (microcontroller-PC) algorithm development

I was asked to develop an algorithm for a network application in C. This project will be developed on Linux for the PC and then transferred to a more portable platform, something that will include a microcontroller. There are many microcontroller companies out there that provide very nice and large libraries for TCP/IP. This software will hold statistics on the network performance.
The whole idea of a cross-platform (uC - PC) approach seems like rubbish to me, because eventually the code will have to be written in a more platform-specific way for the microcontroller, but I am no expert to judge anyway.
Is there any clever way of doing this, or is there anyone who has done this before? My brainstorming has "wrapper library" and "Matlab"... Any ideas?
Thx!
I do agree with you to some extent - the target system and the system on which you develop in the interim should be as close as possible (it is better if they match exactly). Nevertheless, the idea with cross-platform development is to get you started with the firmware while the hardware is being designed. Instead of doing it on Linux, what I would do is use an embedded-OS simulator. Here are the steps:
- Step 1: Identify the OS for the embedded system; make sure that OS has a simulator that runs on a PC (Windows or Linux). Typical embedded OSes with simulators include VxWorks, μC/OS-II, QNX, uClinux, ... Agreeing on the OS means that the hardware design team knows that the OS is the right match for the hardware that is being designed, and there is a consensus that the hardware + OS + application being designed will meet the requirements of the system being developed.
- Step 2: Use this simulator to develop the application until the hardware that is being designed is brought up.
- Step 3: Once the first version of the hardware is ready and has been powered up, you can run your application with minimum changes - most likely no changes to the code, but changes to the linker/libraries being used are likely.
The idea of cross-platform development, if done correctly, has immense advantages - it helps avoid serializing your project development activities.
Given that you mention it is a TCP/IP application, check for Berkeley Sockets support and use it. Usually this API should not matter if you are using a simulator; in the extreme case, if you have to change the OS for whatever reason, your Berkeley Sockets based application is likely to be more portable.
Just assume you can use the standard BSD socket library (system calls are socket(), bind(), accept(), connect(), recv(), send(), with various options). Any OS with a TCP/IP stack will support this standard API.
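As a sketch of what that looks like in practice, here is a minimal TCP client that sticks to the standard calls listed above; the address and port are just placeholders. Code like this should move across to any stack that offers a BSD-style socket API.

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Minimal TCP client using only the standard BSD socket calls named above. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in peer = {0};
        peer.sin_family = AF_INET;
        peer.sin_port   = htons(7);                      /* echo port, just as an example */
        inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr); /* documentation address */

        if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        const char msg[] = "hello";
        send(fd, msg, sizeof msg - 1, 0);

        char buf[64];
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n > 0)
            printf("got %zd bytes back\n", n);

        close(fd);
        return 0;
    }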
There may be some caveats that you will run into if your embedded system uses a run-to-completion type TCP/IP stack like uIP, but those will be easily solvable.
Also, stick to standard C file I/O (fopen, fread, fwrite, printf, etc.) rather than platform-specific calls. But keep in mind that your target may not have a filesystem.
If using a simulator was not an option I would try to wrap the Linux functions up in interfaces that match those of the embedded system, if possible. That way any extra bulk in the system will be on the Linux development system (which is not resource constrained). Various embedded OSes and TCP/IP stacks can have vastly different architectures, so how easy this is can range from nearly impossible to no work at all.
If it turns out that writing wrapper libraries to make Linux look like the embedded system is too difficult then I suggest at least trying to keep the embedded OS in mind while writing the Linux version so that you can try to at least write some functions so that they work on both systems.
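One possible shape for such a wrapper, as a small abstraction header; all the names here are invented, and each platform would provide its own implementation file behind it:

    /* net_port.h - hypothetical platform abstraction layer.
     * The application only calls these; a net_port_linux.c would implement them
     * with BSD sockets, while a net_port_uip.c (or whatever the target stack is)
     * would implement them with the embedded TCP/IP stack's native API. */
    #ifndef NET_PORT_H
    #define NET_PORT_H

    #include <stddef.h>
    #include <stdint.h>

    typedef struct net_conn net_conn_t;    /* opaque; each port defines it */

    net_conn_t *net_connect(const char *host, uint16_t port);
    int         net_send(net_conn_t *c, const void *buf, size_t len);
    int         net_recv(net_conn_t *c, void *buf, size_t len);
    void        net_close(net_conn_t *c);

    #endif /* NET_PORT_H */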
If it doesn't take too long, writing a Linux version of at least part of the code may help you shake out a few flaws in the overall design, at the very least. At most, it will allow you to test changes to the system more quickly, since loading code onto an embedded device often takes more time than you would like. It may also be easier to debug on your development machine.
Some embedded OSes will run on x86, and it would not surprise me if some of them have drivers that allow them to be run in virtual machines, so this may be an option as well.
Another thing to consider is the endian-ness and the word size of the development machine verses the embedded system. If these differ then you need to keep this in mind as you code. Getting this type of thing right when you originally write the code is easier than going back and trying to fix code, in my opinion.
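A small example of coding with endianness in mind: serialize multi-byte values with explicit shifts rather than copying raw structs, and the bytes on the wire are the same whatever the host's endianness or word size.

    #include <stdint.h>

    /* Write a 32-bit value in big-endian ("network") byte order, byte by byte.
     * Explicit shifts make the result identical regardless of the host's
     * endianness or word size. */
    void put_u32_be(uint8_t *out, uint32_t v)
    {
        out[0] = (uint8_t)(v >> 24);
        out[1] = (uint8_t)(v >> 16);
        out[2] = (uint8_t)(v >> 8);
        out[3] = (uint8_t)(v);
    }

    uint32_t get_u32_be(const uint8_t *in)
    {
        return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
               ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
    }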

Download control board software simulators

I am interested in learning how to do embedded systems programming in C. However, I will need some hardware.
I am wondering: is there any software that can simulate what the control board will do?
The control board is listed in the following tutorial
http://www.learn-c.com/hardware.htm
Many thanks for any advice
The board you linked to is not an embedded system board, it is an I/O interface for a PC. If you want to simulate that, you can simply write PC code stubs for the I/O functions that simulate connected devices' behaviour. However, you will not learn much about embedded systems from this. You may learn a little about PC based control, but since the board does not support interrupts or DMA, I suggest again that you will not learn much of that either.
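As a sketch of that stub approach (the function names are invented here, not taken from the tutorial), the rest of the program calls the same I/O functions whether real hardware or the simulation sits behind them:

    /* PC-side stubs standing in for the I/O board. The names and behaviour are
     * invented for illustration; the point is that the rest of the program calls
     * the same functions whether real hardware or this simulation is behind them. */
    #include <stdio.h>
    #include <stdint.h>

    static uint8_t simulated_switches = 0x00;    /* pretend state of 8 input switches */

    uint8_t read_inputs(void)                    /* stub for "read the input port" */
    {
        return simulated_switches;
    }

    void write_outputs(uint8_t value)            /* stub for "write the output port" */
    {
        printf("outputs now: 0x%02X\n", value);  /* show LEDs/relays on the console */
    }

    int main(void)
    {
        simulated_switches = 0x05;               /* simulate two switches closed */
        write_outputs(read_inputs());            /* simple loop-back of the simulation */
        return 0;
    }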
Moreover the board is designed for an ISA bus slot. Modern PCs no longer have such slots. And modern operating systems prevent access to hardware I/O in user level code.
If you are serious about learning embedded systems development, you might for example download Keil's MDK-ARM evaluation; it includes an ARM simulator with on-chip peripheral simulation for a number of commonly available ARM based micro-controllers, and real hardware is available at reasonable cost.
If PC based control is of more interest, then you would be better off starting with a USB based I/O device, such as this example.

Resources