NASM: accessing the sound card directly (no OS) - C

I'm attempting to write a very simple OS in ASM and C. (NASM assembler)
I would like to access the sound card directly, with or without drivers.
If I don't need drivers, how could I access and send a sample audio file
to the sound card? (An example would be nice)
If I do need drivers, is there any way to interface with them and call functions
from the drivers? And how do I access and send a sample audio file to the
sound card? (Another example would be nice)

I hate to discourage you, but modern sound card drivers are extremely complicated, and as you probably know, OS-specific. This is one of the difficult challenges in OS development - driver support. It's not something that can be achieved with a simple code snippet.
In order to load a file, you need a file system. Have you implemented that yet? The fact that you used the "kernel" tag suggests that your OS is still in its infancy. I'm not sure I would want to put sound support into the kernel of an operating system.
That being said, there is a good emulator called Bochs that has Sound Blaster 16 emulation. And some really old documentation for how to program it. This might be your best bet. Accessing sound hardware was much easier back in the day.

Your best bet is probably to look at either the Linux or FreeBSD sound drivers and see what they do. You're not likely to get much better implementation documentation for any but the simplest sound card...
This is a hard problem. Be warned :-p

Of course you need a driver, and of course there's no easy way to interface with existing ones (there was some proposal for a unified OS-agnostic "Uniform Driver Interface" - but I don't think it got anywhere).
So, after you've written the code to read a file from your hard drive, you'll need to roll your own audio driver.
Now, I haven't done this in a while, so this may be outdated, but in the '90s you'd configure your sound card with a few 'out dx, al' instructions (details varied across sound cards), and then set up DMA to send data from a memory buffer to your card. The card (or was it the DMA controller?) would fire off an interrupt when it reached the end of the buffer, which you'd use to fill the buffer with new data.
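To make that concrete, here is a rough sketch in C of the kind of sequence the old Sound Blaster 16 documentation (the card Bochs emulates, as mentioned above) describes, assuming the classic defaults of base port 0x220 and 8-bit DMA channel 1, a buffer of unsigned 8-bit mono samples sitting in identity-mapped memory below 16 MB, and your own freestanding kernel. The port numbers and commands come from the old SB16 programming docs; real hardware needs more care (IRQ handling, mixer setup, timing), so treat this as an outline rather than a finished driver:

    #include <stdint.h>

    /* Minimal port I/O helpers (GCC inline assembly, x86). */
    static inline void outb(uint16_t port, uint8_t val) {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }
    static inline uint8_t inb(uint16_t port) {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    #define SB_BASE   0x220              /* assumed default base port    */
    #define DSP_RESET (SB_BASE + 0x6)
    #define DSP_READ  (SB_BASE + 0xA)
    #define DSP_WRITE (SB_BASE + 0xC)    /* also the write-buffer status */
    #define DSP_RSTAT (SB_BASE + 0xE)    /* read-buffer status           */

    static void dsp_write(uint8_t val) {
        while (inb(DSP_WRITE) & 0x80) ;  /* wait until the DSP is ready  */
        outb(DSP_WRITE, val);
    }

    /* Play an 8-bit unsigned mono buffer once through ISA DMA channel 1.
       The buffer's physical address must be below 16 MB and must not
       cross a 64 KB boundary (8237 DMA limitation). */
    static void sb16_play(uint8_t *buf, uint16_t len, uint16_t rate) {
        uint32_t addr = (uint32_t)(uintptr_t)buf;

        /* 1. Reset the DSP and wait for its 0xAA "ready" byte. */
        outb(DSP_RESET, 1);
        for (volatile int i = 0; i < 1000; i++) ;     /* crude ~3 us wait */
        outb(DSP_RESET, 0);
        while (!(inb(DSP_RSTAT) & 0x80)) ;
        if (inb(DSP_READ) != 0xAA) return;

        /* 2. Program the 8237 DMA controller, channel 1, single cycle. */
        outb(0x0A, 0x05);                    /* mask channel 1            */
        outb(0x0C, 0x00);                    /* clear the byte flip-flop  */
        outb(0x0B, 0x49);                    /* single mode, mem->card, 1 */
        outb(0x02, addr & 0xFF);             /* address low byte          */
        outb(0x02, (addr >> 8) & 0xFF);      /* address high byte         */
        outb(0x83, (addr >> 16) & 0xFF);     /* page register, channel 1  */
        outb(0x03, (len - 1) & 0xFF);        /* count low byte            */
        outb(0x03, ((len - 1) >> 8) & 0xFF); /* count high byte           */
        outb(0x0A, 0x01);                    /* unmask channel 1          */

        /* 3. Tell the DSP the sample rate and start 8-bit output. */
        dsp_write(0xD1);                     /* speaker on                */
        dsp_write(0x41);                     /* set output sample rate    */
        dsp_write(rate >> 8);
        dsp_write(rate & 0xFF);
        dsp_write(0xC0);                     /* 8-bit single-cycle output */
        dsp_write(0x00);                     /* mode: unsigned mono       */
        dsp_write((len - 1) & 0xFF);
        dsp_write(((len - 1) >> 8) & 0xFF);

        /* The card raises its IRQ when the transfer completes; a real
           driver would acknowledge it there and refill the buffer.       */
    }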
If your card has a working Linux driver, I'd start by looking at its code. Otherwise, you'll have to reverse engineer the Windows driver; SoftICE's bpio (break on I/O port access) logging used to be good for that, IIRC.
Good luck.

Here is a free, open-source operating system written entirely in assembly. It is a great reference for assembly kernel programming if you are new to it.
http://www.menuetos.net/index.htm

Related

How does C programming allow hardware-level control?

I read somewhere that learning C gives us an actual idea of what is happening at the hardware level, i.e. that C teaches us "real" programming: how memory is being utilised, how the hardware resources are used, and that it lets us get at hardware-level things, so that we are the ones who can use and control these resources in our own way, whereas other high-level languages don't allow this.
Now I am learning C programming, but I am not able to understand how I am controlling my hardware resources.
I have no idea how it allows me to use my computer's resources on my own.
In user mode, on a 32- or 64-bit multitasking operating system, even C won't show you a tiny bit of hardware - the lowest level you'll see is the operating system itself.
You may ask the OS to draw a window, to save a file, to send data through a network - you won't directly touch the GPU, the disk controller or the Ethernet MAC/PHY chip to do that. In fact, you probably won't even be able to tell which KIND of hardware is behind it... Is it an Nvidia card? An old SVGA one? A mechanical hard drive, or an NVMe drive? A 10BaseT NIC, or a 10 Gb/s optical fibre network card? You can't tell just with C. Only the OS knows, and it's the OS that may tell you. You'll get that information in C exactly the way you would have got it with, let's say, Python.
To see hardware and how it works, you'll need to be able to touch the hardware with software instructions. On a modern OS, that means being in kernel mode. Or you can use an old-timer OS, like MS-DOS, or even no OS at all - called "bare-metal development", often encountered with microcontrollers like the Arduino and similar devices.
In this world, you'll need to learn what a register is, how GPIO works, how you address a UART, and if you use specific controllers, you'll have to read (and understand!) their datasheets if you want to make them work.
Indeed, it's often easier to write such low-level code in C rather than in assembler - especially since each CPU has its own assembly language, so that can end up being a lot of languages to master. But it's not mandatory. It can be done with any language, as long as you can produce an absolute (fixed addresses, no relocation needed), standalone (no dependencies) and ROMable binary that can be written to the Flash/EEPROM of your microcontroller. That's possible in assembler, C, C++ and Ada for the most common ones, and in virtually any language that doesn't need a (too) big runtime library.
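To give a flavour of what that looks like, here is a minimal bare-metal sketch in C that prints a string through a memory-mapped UART by polling a status flag. The addresses and bit positions are invented for the example - a real chip's datasheet defines its own - but the pattern (a volatile pointer to a register, poll a ready bit, write a data register) is the same everywhere:

    #include <stdint.h>

    /* Hypothetical memory-mapped UART; real addresses and bit positions
       come from your microcontroller's datasheet. */
    #define UART_BASE   0x40001000u
    #define UART_STATUS (*(volatile uint32_t *)(UART_BASE + 0x0))
    #define UART_DATA   (*(volatile uint32_t *)(UART_BASE + 0x4))
    #define TX_READY    (1u << 0)           /* assumed "transmitter ready" bit */

    static void uart_putc(char c) {
        while (!(UART_STATUS & TX_READY)) ; /* busy-wait on the status flag    */
        UART_DATA = (uint8_t)c;             /* write the data register         */
    }

    static void uart_puts(const char *s) {
        while (*s)
            uart_putc(*s++);
    }

    /* Entry point called from your startup/reset code - no OS anywhere. */
    void bare_metal_main(void) {
        uart_puts("hello from bare metal\r\n");
        for (;;) ;                          /* nothing to return to            */
    }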

How to switch from VGA to SVGA in OS programming

I'm trying to understand all of OS theory, but here's a problem: I can't find any information on the net about switching to SVGA (or HDMI output) to draw on a monitor. I already know we have 4 x 64 KB allocated as video memory for VGA, but really, that's limiting if we want a 1080x720 resolution. So:
- How do I switch to SVGA (or HDMI)? Probably a syscall or I/O request?
- After that, how do I redefine the address of the video memory?
- Bonus question: how do I use hardware acceleration?
Thank you in advance for your answers, and sorry for any mistakes in my English.
The only common low-level standard for resolutions bigger than VGA is the VESA BIOS Extensions (VBE). I don't know how widely it's covered by UEFI backward compatibility, because it's almost never used nowadays.
BIOS extensions are like drivers built into the card itself. To switch the video mode, or to get a pointer to VRAM, you have to call the proper VBE service via its interrupt. The ROM "driver", designed to work with that particular hardware, performs the needed operations and returns the result.
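For example, once your boot code has picked a VBE mode with a linear framebuffer (function 0x4F02) and saved the interesting fields from the mode-info block returned by function 0x4F01, drawing from C is just writing to memory. A sketch, assuming a 32-bits-per-pixel mode and that the framebuffer address is mapped into your address space:

    #include <stdint.h>

    /* Fields copied by the boot stage from the VBE mode-info block
       (function 0x4F01): framebuffer address, bytes per scanline,
       chosen resolution. The struct and names here are our own. */
    struct fb_info {
        uint8_t *lfb;           /* linear framebuffer (must be mapped in) */
        uint32_t pitch;         /* bytes per scanline                     */
        uint32_t width, height;
    };

    static void put_pixel(const struct fb_info *fb,
                          uint32_t x, uint32_t y, uint32_t rgb) {
        if (x >= fb->width || y >= fb->height)
            return;
        /* One scanline is 'pitch' bytes; each pixel is 4 bytes at 32 bpp. */
        uint32_t *row = (uint32_t *)(fb->lfb + y * fb->pitch);
        row[x] = rgb;
    }

    static void fill_screen(const struct fb_info *fb, uint32_t rgb) {
        for (uint32_t y = 0; y < fb->height; y++)
            for (uint32_t x = 0; x < fb->width; x++)
                put_pixel(fb, x, y, rgb);
    }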
Unfortunately, hardware acceleration was never covered by VBE, so it became more and more obsolete as GPUs became more and more important. No suitable replacement was developed, so if you want to work with the bare hardware, you must know every video chip (or at least a chip family, if they're close enough) and write a driver for each one. If the datasheets are freely available, it's easy (I've worked with 3dfx, and it's simple: write to port N, wait until bit R on port M becomes 1, etc.).
The problem is, you have to do it for every chip.
You can also read some Linux driver sources, if you want to see how all those ports and I/O are triggered.

What's the point of a Linux character device driver if you can just use outb/inb from userspace? [closed]

I'm having a hard time understanding when I should write a device driver instead of just sending opcodes directly to the hardware via outb from my userspace programs. I initially figured that I should create simple routines for the hardware, but now I'm starting to think that algorithms should stay in userspace.
Suppose I'm programming a hypothetical robotic arm. I could write several functions in a Linux kernel module that would automate the hardware outputs needed for common tasks (e.g. move the arm to the HOME position, pick up a new block from a known location at the start of the assembly line, etc.). However, after reading more about device drivers, it seems that the rule of thumb is to keep the device driver as close to hardware-specific code as possible, leaving the "heavy lifting" algorithms to userspace.
This confuses me, since if the only functions implemented by the device drivers are simple opcode calls, what's the reason for a userspace program to use the device file instead of calling outb/inb directly?
I suppose what I'm trying to figure out is: how do I decide what functionality goes in kernelspace instead of userspace?
Good question. I've wrestled with that - I've even written drivers to control robotic arms when I knew for a fact it was not necessary. I could just as easily have sent commands through a serial port, or with outb(), etc. I only wrote those drivers for educational purposes.
There are a lot of good reasons for a device driver. Imagine trying to control your network card directly from userspace! First of all, a driver gives you a nice abstraction at the OS level (eth0, etc). But trying to handle interrupts for packet send/receive in userspace would be wildly impractical from a performance standpoint - maybe even impossible. Just the time it takes to respond to an interrupt in userspace would drag the interface to its knees.
Imagine further that you bought a new network card. Wouldn't it be nice to just load the new driver and continue talking to eth0 from userspace with no changes to your code?
So, I would say "there is no point" in writing a driver if you don't see the need. I think the existence of drivers is driven by the need (as in the NIC driver example), not the other way around.
It sounds like for your app, outb() is going to be much more straightforward than creating a driver. In the end, I didn't even use my robotic arm drivers - just writing bytes to the serial port worked just as well - and only required a few lines of code ;-)
If you use outb and inb in userspace, then your userspace code will be x86-specific - the userspace outb() and inb() macros are implemented with x86 assembly. On the other hand, if you write a kernel driver, then your driver will work on any PCI-supporting architecture - the inb() and outb() functions in the kernel are implemented in an architecture-specific manner. The kernel also gives you functions like request_region() to ensure that your I/O ports don't clash with any other driver.
Furthermore, your userspace driver would need to run as root (or technically with the CAP_SYS_RAWIO capability, which is root-equivalent). A character device driver in the kernel would mean that you can use the UNIX permissions on the character device file to control which userspace user can access the device.
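For comparison, this is roughly what the userspace route looks like on x86 Linux: the process needs root (or CAP_SYS_RAWIO) for ioperm() to succeed, and the code only builds on x86 because <sys/io.h> expands outb()/inb() into port I/O instructions. The port used below (0x378, the legacy parallel-port data register) is only an example:

    /* Userspace port I/O on x86 Linux: needs root or CAP_SYS_RAWIO. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>        /* ioperm(), outb(), inb() - x86 only */

    #define PORT 0x378          /* example: legacy parallel-port data register */

    int main(void) {
        if (ioperm(PORT, 3, 1) != 0) {       /* request access to 3 ports    */
            perror("ioperm");
            return EXIT_FAILURE;
        }
        outb(0xFF, PORT);                    /* note: value first, then port */
        printf("status register: 0x%02x\n", inb(PORT + 1));
        ioperm(PORT, 3, 0);                  /* drop the access again        */
        return EXIT_SUCCESS;
    }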
A device driver should implement only the mechanism for handling the hardware (whatever the operating system).
All the intelligence of a solution must live in user-space.
Yes, you can do everything in user-space, but:
- it is not reusable: other user-space programs must re-implement the mechanism to gain access to the robotic arm (for example)
- performance suffers: it depends on the application - it may not be a problem for a robotic arm (which is slow), but it can be a problem for a network card, a disk or a graphics card
So, for a robotic arm, you should implement the mechanism in the driver (move a motor, get information from sensors). Then your program and other programs can use the driver to do something clever with the arm. The clever stuff is done by the user-space program: paint the Gioconda, prepare a cake, move dynamite carefully. The driver is just the implementation of the basic functions that let its users drive the hardware.
But obviously, it depends on the hardware and context.
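To illustrate that split, here is a hypothetical interface for the robotic arm - the device name, the ioctl numbers and the structures are all invented for this example. The driver exposes nothing but "move this motor by this many steps" and "read this sensor"; the sequencing (the policy) stays in the userspace program:

    /* Hypothetical ioctl interface exported by the arm driver
       (this part would normally live in a shared header). */
    #include <linux/ioctl.h>

    struct arm_move { int motor; int steps; };

    #define ARM_IOC_MAGIC   'R'
    #define ARM_MOVE        _IOW(ARM_IOC_MAGIC, 1, struct arm_move)
    #define ARM_READ_SENSOR _IOR(ARM_IOC_MAGIC, 2, int)

    /* Userspace program: all the policy (what the arm actually does). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/robo_arm0", O_RDWR);   /* hypothetical device node   */
        if (fd < 0) { perror("open"); return 1; }

        struct arm_move home = { .motor = 0, .steps = -500 };
        ioctl(fd, ARM_MOVE, &home);                /* mechanism: move one motor  */

        int pressure = 0;
        ioctl(fd, ARM_READ_SENSOR, &pressure);     /* mechanism: read one sensor */
        printf("gripper pressure: %d\n", pressure);

        close(fd);
        return 0;
    }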

Cross-platform (microcontroller-PC) algorithm development

I was asked to develop an algorithm for a network application in C. This project will be developed on Linux for the PC and then it will be ported to a more portable platform, something that will include a microcontroller. There are many microcontrollers/companies out there that provide very nice and large libraries for TCP/IP. This software will keep statistics on network performance.
The whole idea of cross-platform (uC - PC) development seems like rubbish to me, because eventually the code will have to be written in a more platform-specific way for the microcontroller, but I am no expert to judge anyway.
Is there any clever way of doing this, or has anyone done this before? My brainstorming so far has produced "wrapper library" and "Matlab"... Any ideas?
Thx!
I do agree with you to some extent - you want the target system and the system on which you develop in the interim to be as close as possible (it is better if they match). Nevertheless, the idea with cross-platform development is to get you started on the firmware while the hardware is being designed. Instead of doing it on Linux, what I would do is use an embedded-OS simulator. Here are the steps:
- Step 1: Identify the OS for the embedded system and make sure that OS has a simulator that runs on a PC (Windows or Linux). Typical embedded OSes with simulators include VxWorks, μC/OS-II, QNX and uClinux. Agreeing on the OS means the hardware design team knows it is the right match for the hardware being designed, and there is consensus that the hardware + OS + application will meet the requirements of the system being developed.
- Step 2: Use this simulator to develop the application until the hardware that is being designed is brought up.
- Step 3: Once the first version of the hardware is ready and has been powered up, you can run your application with minimal changes - most likely no changes to the code, though changes to the linker setup and the libraries being used are likely.
The idea of cross-platform development, if done correctly, has immense advantages - it stops your project's development activities from being serialized.
Given that you mention it is a TCP/IP application, check for Berkeley sockets support and use it. This API usually carries over well if you are using a simulator, and in the extreme case where you have to change the OS for whatever reason, a Berkeley-sockets-based application is likely to be more portable.
Just assume you can use the standard BSD socket library (system calls are socket(), bind(), accept(), connect(), recv(), send(), with various options). Any OS with a TCP/IP stack will support this standard API.
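For instance, a small client written against nothing but the BSD socket calls looks the same on Linux and on most embedded stacks that provide the API (the address and port below are placeholders; some small stacks omit helpers like inet_pton(), which is easy to work around):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7);                        /* example: echo port      */
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr); /* documentation address   */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        const char msg[] = "stats ping";
        send(fd, msg, sizeof(msg) - 1, 0);               /* same call on uC and PC  */

        char buf[128];
        ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
        if (n > 0) {
            buf[n] = '\0';
            printf("reply: %s\n", buf);
        }
        close(fd);
        return 0;
    }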
There may be some caveats that you will run into if your embedded system uses a run-to-completion style TCP/IP stack like uIP, but those will be easily solvable.
Also only use POSIX file I/O (fopen, fread, fwrite, printf, etc). But keep in mind your target may not have a filesystem.
If using a simulator is not an option, I would try to wrap the Linux functions in interfaces that match those of the embedded system, if possible. That way any extra bulk in the system will be on the Linux development machine (which is not resource-constrained). Various embedded OSes and TCP/IP stacks can have vastly different architectures, so how easy this is can range from no work at all to nearly impossible.
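One way to do that is to define a thin portability interface of your own and give it one implementation per platform - the names below are invented for illustration, and the application only ever includes this header:

    /* net_port.h - the only networking interface the application sees. */
    #ifndef NET_PORT_H
    #define NET_PORT_H
    #include <stddef.h>

    typedef int net_conn_t;                 /* opaque connection handle  */

    net_conn_t net_connect(const char *host, unsigned short port);
    int        net_send(net_conn_t c, const void *buf, size_t len);
    int        net_recv(net_conn_t c, void *buf, size_t len);
    void       net_close(net_conn_t c);

    #endif

    /* net_port_linux.c would wrap BSD sockets; net_port_target.c would
       wrap the vendor's TCP/IP library with the same four functions, so
       the statistics code itself never changes between PC and target.   */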
If it turns out that writing wrapper libraries to make Linux look like the embedded system is too difficult, then I suggest at least keeping the embedded OS in mind while writing the Linux version, so that some of your functions work on both systems.
If it doesn't take too long, writing a Linux version of at least part of the code may help you shake out a few flaws in the overall design, at the very least. At best, it will allow you to test changes to the system more quickly, since loading code onto an embedded device often takes more time than you would like. It may also be easier to debug on your development machine.
Some embedded OSes will run on x86, and it would not surprise me if some of them have drivers that allow them to be run in virtual machines, so this may be an option as well.
Another thing to consider is the endianness and the word size of the development machine versus the embedded system. If these differ, then you need to keep that in mind as you code. Getting this type of thing right when you originally write the code is easier than going back and trying to fix it afterwards, in my opinion.
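A habit that sidesteps most of those problems is to never put raw structs on the wire and instead serialize field by field in a fixed byte order, for example:

    #include <stdint.h>

    /* Serialize a 32-bit value in big-endian ("network") byte order,
       independently of the host CPU's endianness or word size. */
    static void put_u32_be(uint8_t *out, uint32_t v) {
        out[0] = (uint8_t)(v >> 24);
        out[1] = (uint8_t)(v >> 16);
        out[2] = (uint8_t)(v >> 8);
        out[3] = (uint8_t)(v);
    }

    static uint32_t get_u32_be(const uint8_t *in) {
        return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
               ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
    }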

input and output without a library in C

I'm writing a small kernel for my programs in C.
This is not (at the moment) an OS kernel, it's merely a way for me to keep track of input and output in programs without relying on external sources (i.e. stdio.h). You might ask me why I'd ever want to do this; it's just so I know how this works, and so that I have more and more (the end goal is total) control over program flow.
I was wondering if anyone knows some tutorials on input and output in C (with inline asm?) without relying on any other code.
There is a lot of room between the bare metal and stdio. You have said you aren't writing an OS kernel, but not whether or not you are running under an OS.
Running directly on hardware without an OS, you will still want to encapsulate all of your I/O operations in a module, even if you don't formally define a device driver interface and framework for all of your I/O modules to follow. This is hugely architecture dependent, and makes you responsible for knowing all of the details of interaction with every I/O device you might ever use. For some devices, this can quickly become a huge development effort. That isn't a problem for embedded systems, but running on commercial hardware this way is neither easy nor recommended.
Running within an OS, you probably don't get (and shouldn't want to get) access to the actual hardware registers and interrupts. If you are developing a custom I/O device, the best practice is to make it conform to existing standards so that you need as little low level custom software for it as possible. This is why you see a lot of custom user interface gadgets connecting via USB and identifying themselves as HIDs (Human Interface Devices). As a HID, the existing USB drivers take care of the physical layer, and the OS-supplied HID driver takes care of the logical interface, providing a very simple high level access API to the application.
One of the operating system's key roles is to provide a consistent I/O API across all devices. Generally, that takes the form of open(), close(), read(), write(), and ioctl() functions (the names vary, but some form of at least the first four will always exist). The OS layer is quite raw, however. Typically, an OS call is forwarded without much processing to a device driver, which then forwards the data on to the device. Usually, the OS low level calls block the caller until they complete, and often they have restrictions on the sizes of the buffers that make sense. For instance, raw access to a disk device is usually required to be for an integral number of disk blocks at a time.
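To make that concrete, here is a small userspace sketch of a raw, cache-bypassing read on Linux: with O_DIRECT the offset, the length and even the buffer address all have to be aligned to the device's block size (the device path and the 512-byte block size are just examples, and reading a disk device like this requires appropriate privileges):

    #define _GNU_SOURCE            /* for O_DIRECT on Linux               */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLOCK 512              /* assumed sector size                 */

    int main(void) {
        /* O_DIRECT bypasses the page cache, so transfers must be done in
           whole, aligned blocks from an aligned buffer. */
        int fd = open("/dev/sdb", O_RDONLY | O_DIRECT);   /* example device */
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, BLOCK, BLOCK) != 0) { close(fd); return 1; }

        ssize_t n = read(fd, buf, BLOCK);       /* exactly one sector      */
        printf("read %zd bytes\n", n);

        free(buf);
        close(fd);
        return 0;
    }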
And don't forget about things like file systems and network protocols... all of which are made much more reliable and compatible by encapsulation within an operating system.
Even if it is acceptable to call read() and write() for single characters, that is usually not the best performance possible. Operating system calls are relatively expensive, and if you can read multiple characters in a single call, your performance can go way up.
That is the origin of the stdio library for C, and various other buffering libraries in other environments. The stdio library provides a buffering layer that isolates the C code from the block size of the underlying hardware. Even on an entirely home-grown operating system where you have full control over all the devices, something like C stdio will still be valuable.
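The core idea behind that buffering layer fits in a few lines: fetch a large chunk with one system call, then hand bytes out of memory until the chunk is exhausted. A simplified sketch of a getchar()-style function built directly on read():

    #include <unistd.h>

    #define MYBUF_SIZE 4096

    static char  mybuf[MYBUF_SIZE];
    static char *mypos = mybuf;      /* next character to hand out        */
    static char *myend = mybuf;      /* one past the last valid character */

    /* Like getchar(): one read() call refills the whole buffer, then
       most calls simply return a byte from memory. Returns -1 on EOF. */
    int my_getchar(void) {
        if (mypos == myend) {
            ssize_t n = read(0, mybuf, MYBUF_SIZE);   /* fd 0 = stdin      */
            if (n <= 0)
                return -1;
            mypos = mybuf;
            myend = mybuf + n;
        }
        return (unsigned char)*mypos++;
    }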
Writing your own stdio replacement is a highly valuable exercise, even if you don't use it in production code, and is one I would recommend to anyone wanting to learn about what really goes on between printf() and scanf() and the terminal or files.
One valuable resource is the book The Standard C Library by P.J. Plauger. In it, the author presents an implementation of the complete C runtime library specified in the ANSI standard. His discussion of the specific implementation choices he made is valuable and apropos to the context of this question, and the discussions of why some of the standard library features were specified is interesting as well.
This sort of thing is very architecture specific. To put it simply, your I/O devices will raise hardware interrupts to the CPU. The CPU will call the code associated with the interrupt which will deal with it appropriately; for an input device it will fetch the data that is available from the device, for an output device the interrupt usually means that the device is ready to send the next piece.
The old 8088/8086 CPU architecture is a nice simple place to start to get your head around this. Typically, the BIOS would be where the hardware interrupts would have been handled, but it was always possible to write your own. ;)
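As a small sketch of what such a handler does on PC hardware, here is the core of a keyboard (IRQ 1) handler in C: read the byte the device has ready, then tell the 8259 interrupt controller you are done. How the handler gets installed (the IDT/IVT entry and the assembly stub that saves registers and executes iret) is separate architecture-specific setup not shown here, and the outb/inb helpers are the same inline-assembly wrappers as in the sound card sketch above:

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val) {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }
    static inline uint8_t inb(uint16_t port) {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    #define KBD_DATA 0x60          /* PS/2 keyboard data port             */
    #define PIC1_CMD 0x20          /* master 8259 PIC command port        */
    #define PIC_EOI  0x20          /* "end of interrupt" command          */

    volatile uint8_t last_scancode;

    /* Called (via a small assembly stub) whenever IRQ 1 fires. */
    void keyboard_irq_handler(void) {
        last_scancode = inb(KBD_DATA);   /* fetch the byte the device holds */
        outb(PIC1_CMD, PIC_EOI);         /* allow the PIC to deliver more   */
    }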
