How to switch from VGA to SVGA in OS programming - C

I'm trying to understand OS theory as a whole, but here's a problem: I can't find any information on the net about switching to SVGA (or driving an HDMI output) to draw on a monitor. I already know VGA gives us only four 64 KB planes of video memory, but that's really limited if we want something like a 1280x720 resolution. So
- How do I switch to SVGA (or HDMI)? Probably a syscall or an I/O request?
- After that, how do I find the new address of the video memory?
- Bonus question: how do I use hardware acceleration?
Thank you in advance for your answers, and sorry for any mistakes in my English.

The only common low-level standard for resolutions bigger than VGA is the VESA BIOS Extensions (VBE). I don't know how widely it is still available through UEFI's backward-compatibility layer, because it's almost never used nowadays.
BIOS extensions are like drivers built into the card itself. To switch the video mode, or to get a pointer to VRAM, you call the proper VBE service via its interrupt (INT 10h). The ROM "driver", designed to work with that specific hardware, performs the needed operations and returns the result.
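To make that concrete, here is a hedged C sketch of the two VBE calls involved: function 4F01h returns a ModeInfoBlock whose PhysBasePtr field is the physical address of the linear framebuffer, and function 4F02h switches modes. The struct layout follows the VBE 2.0 specification, but `bios_int10` is a hypothetical helper - how you issue a real-mode INT 10h from a protected-mode kernel is entirely up to your OS.

```c
#include <stdint.h>

/* Partial sketch of the 256-byte ModeInfoBlock that VBE function 4F01h
   fills in. Only the fields needed here are spelled out; offsets follow
   the VBE 2.0 specification. */
struct vbe_mode_info {
    uint16_t attributes;      /* bit 7 set => linear framebuffer available */
    uint8_t  win_fields[14];  /* banked-window fields, unused with an LFB  */
    uint16_t pitch;           /* bytes per scanline                        */
    uint16_t width, height;   /* resolution of this mode                   */
    uint8_t  misc[18];        /* char cell, planes, bpp, colour masks, ... */
    uint32_t framebuffer;     /* PhysBasePtr: physical address of the LFB  */
    uint8_t  rest[212];       /* remainder of the 256-byte block           */
} __attribute__((packed));

/* Hypothetical helper: issuing a real-mode INT 10h from a protected-mode
   kernel (v86 task, real-mode trampoline, ...) is entirely OS-specific.
   Returns the value left in AX; 004Fh means success. */
extern uint16_t bios_int10(uint16_t ax, uint16_t bx, uint16_t cx,
                           uint16_t es, uint16_t di);

/* `info` must be the flat-address view of the low-memory buffer that
   the real-mode segment:offset pair (info_seg:info_off) points at. */
int vbe_set_mode(uint16_t mode, struct vbe_mode_info *info,
                 uint16_t info_seg, uint16_t info_off)
{
    /* AX=4F01h: get mode information for `mode` (in CX) into ES:DI. */
    if (bios_int10(0x4F01, 0, mode, info_seg, info_off) != 0x004F)
        return -1;
    if (!(info->attributes & (1 << 7)))
        return -1;                 /* no linear framebuffer in this mode */
    /* AX=4F02h: set the mode (in BX); bit 14 requests the LFB. */
    if (bios_int10(0x4F02, mode | (1u << 14), 0, 0, 0) != 0x004F)
        return -1;
    return 0;  /* info->framebuffer now holds the physical LFB address */
}
```

Once this succeeds, you map `info->framebuffer` into your address space and draw by writing pixels there, using `pitch` as the distance between scanlines.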
Unfortunately, hardware acceleration was never covered by VBE, so it became more and more obsolete as the GPU became more and more important. No suitable replacement was developed, so if you want to work with bare hardware, you must know every video chip (or at least every chip family, if the members are close enough) and write a driver for each one. If the datasheets are freely available, it's easy (I've worked with 3dfx, and it's simple: write to port N, wait until bit R on port M becomes 1, etc.).
The problem is that you have to do it for every chip.
You can also read some Linux driver sources if you want to see how all those ports and I/O accesses are driven.

Related

How does C programming allow hardware-level control?

I read somewhere that learning C gives us an actual idea of what is happening at the hardware level, i.e. C teaches us "real" programming: how memory is utilised, how hardware resources are used, and how to work with hardware-level things as if we were the ones controlling those resources in whatever way we want, which other high-level languages don't allow.
Now I am learning C, but I can't understand how I am controlling my hardware resources.
I have no idea how it is letting me use my computer's resources directly.
In user mode, on a 32- or 64-bit multitasking operating system, even C won't show you a tiny bit of hardware - the lowest level you'll see is the operating system itself.
You may ask the OS to draw a window, to save a file, or to send data through a network - you won't directly touch the GPU, the disk controller or the Ethernet MAC/PHY chip to do any of that. In fact, you probably won't even be able to tell which KIND of hardware is behind it... Is it an Nvidia card? An old SVGA one? A mechanical hard drive, or an NVMe drive? A 10BaseT NIC, or a 10 Gb/s optical-fibre network card? You can't tell just with C. Only the OS knows, and it's the OS that may tell you. You'll get that information in C exactly like you would have got it with, let's say, Python.
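As a trivial illustration of how opaque this is, here is ordinary user-mode C: every line below is a polite request to the OS, and nothing in the program can see (or cares) what device the bytes finally land on.

```c
#include <stdio.h>

int main(void)
{
    /* This is as "low" as portable user-mode C gets: we ask the OS to
       store some bytes. Whether they end up on a spinning disk, an NVMe
       drive or a network share is decided by - and known only to - the OS. */
    FILE *f = fopen("hello.txt", "w");
    if (!f)
        return 1;
    fputs("written through the OS, not to the hardware\n", f);
    fclose(f);
    return 0;
}
```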
To see the hardware and how it works, you'll need to be able to touch it with software instructions. On a modern OS, that means being in kernel mode. Or use an old-timer OS, like MS-DOS, or even no OS at all - so-called "bare-metal development", often encountered with microcontrollers like the Arduino and similar devices.
In this world, you'll need to learn what a register is, how GPIO works and how you address a UART, and if you use specific controllers, you'll have to read (and understand!) their datasheets if you want to make them work.
Indeed, it's often easier to do such low-level code in C rather than in assembly - especially since each CPU has its own assembly language, so that could ultimately mean a lot of languages to master. But it's not mandatory. It can be done in any language, as long as you can produce an absolute (i.e. linked at fixed addresses), standalone (i.e. no dependencies) and ROMable binary that can be written to the Flash/EEPROM of your microcontroller. That covers assembly, C, C++ and Ada for the most common ones, and virtually any language that doesn't need a (too) big runtime library.
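To show what "touching hardware" looks like in bare-metal C, here is the classic LED-blink pattern: a volatile pointer to a memory-mapped register. The addresses and bit positions below are invented for illustration - on real hardware they come straight from the chip's datasheet.

```c
#include <stdint.h>

/* Hypothetical register block of a memory-mapped GPIO controller.
   Real addresses are found in the microcontroller's datasheet. */
#define GPIO_BASE  0x40020000u
#define GPIO_DIR   (*(volatile uint32_t *)(GPIO_BASE + 0x00))
#define GPIO_OUT   (*(volatile uint32_t *)(GPIO_BASE + 0x04))
#define LED_PIN    (1u << 5)

void main(void)
{
    GPIO_DIR |= LED_PIN;          /* configure the pin as an output */
    for (;;) {
        GPIO_OUT ^= LED_PIN;      /* toggle the LED                 */
        for (volatile int i = 0; i < 100000; i++)
            ;                     /* crude busy-wait delay          */
    }
}
```

The `volatile` qualifier is the whole trick: it forbids the compiler from caching or reordering accesses to what it would otherwise treat as ordinary memory.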

How to write data to a graphics card without using BIOS?

I want to make an (extremely simple) operating system. I am currently learning about graphics cards.
This is what I know so far (please correct me if I am wrong):
A graphics card has two modes: a text mode, and a graphics mode.
You can write data to a graphics card using the BIOS (instead of accessing the graphics card directly).
What I want to do is to write directly to the graphics card's video memory without using BIOS (because I want to understand how things work). So I have the following questions:
How do I know the base address of the graphics card's video memory? Is this done by probing the PCI bus to get the base address, or is the base address fixed (the way the COM ports' base addresses are fixed, for example)?
Are all graphics cards accessed in the same way, or do I have to create device drivers for all available graphics cards?
Edit: I am using x86.
Introduction
Graphics cards are a very complex topic; I'm confident in saying that they are the most complex subsystem you'll find in a PC.
If you ever found yourself lost programming an xHCI (USB 3.0) controller or an old RTL8139 network interface card, be prepared, because this is much more complex.
Graphics controllers are the product of a very competitive market - a vendor rarely opens its specifications, and when it does, the support is intentionally poor.
If you add that the hardware itself deals with codecs, audio (yes, audio streams too), programmable 3D pipelines, video signals and video outputs, surface formats, media formats, DMA and memory remapping, then you can see that programming a video card is not an easy task.
The better approach, in my opinion, is to "retrace the history" of the video cards.
Start from the MDA then move to CGA then EGA and finally to VGA.
The VGA legacy is still supported; the specifications can be found here or in the first part of this PDF from Intel.
You can program the VGA without the BIOS "easily" - meaning that it is a well-known and well-documented hardware architecture (but not necessarily easy to configure).
I don't remember whether the earlier adapters were subsets of the VGA; if not, they probably aren't supported anymore.
You can try with a virtual machine or an emulator.
When you are satisfied with the VGA you can move to the SVGA.
Here come the troubles: as Wikipedia confirms, the VGA was the last truly standardised video card/adapter interface:
Unlike VGA—a purely IBM-defined standard—Super VGA was never formally defined.
The VESA organisation standardised a BIOS API called the Video BIOS Extensions (VBE) to let OSes without native drivers use SVGA cards, but that's not what you were looking for.
You can try reverse engineering a VBE BIOS, but I think it would be a nightmare - a senseless stream of writes to I/O ports and MMIO registers.
Making sense of tens of configuration registers without any reference is almost impossible.
Note that we are still talking about 1998 technology up to this point.
After the VESA VBE effort, no more standard interfaces were published - the only reliable way to program a video card less than 20 years old is by signing an NDA with its vendor.
Luckily, Intel entered the market (relatively recently - well, not anymore, actually) with its Intel GFX (a.k.a. Intel HD Graphics) cards.
Intel never aimed to manufacture top-notch video cards, not even close - so they can be open about their architecture, since that's not their core business.
The result is a marvellous set of Programming Reference Manuals that describe the functionality of their video cards, complete with the (traditionally minimal) information needed to program them.
In general, hobbyists stop before this point (at the SVGA checkpoint), because the hardware has become very complex and the effort required is huge.
For example, my Haswell integrated video card is documented in 17 PDFs of about 250 pages each (on average).
The display part is documented in a PDF of its own; the framebuffer has disappeared in favour of display surfaces, and the display portion of the hardware alone fills a large block diagram (not reproduced here).
While that diagram may not be very comprehensible, it should suffice to give an idea of the numerous technologies a programmer must understand before programming a modern video card.
You can surely take a look at the Linux source code, but beware that the Linux kernel is usually not immediately understandable, even for simple controllers - it is not a toy OS, it is a real OS with its own API and interfaces that the hardware interface must fit into (actually, the other way around).
Furthermore, only the Intel and AMD video drivers are really open source; the others are either proprietary or just a bunch of undocumented code.
Brief outline of programming the common VGA modes
If you just want to program the VGA (a very respectable task indeed!), you can start by setting video mode 03h (text mode) or 13h (graphics mode).
Video mode 03h
The frame buffer is at 0B8000h (physical address), usually accessed as B800h:0000h, as it is handy to have a zero offset.
The screen is made up of 80x25 characters, and each character occupies a word (16 bits) in the frame buffer.
The low byte is the character code - the character map in use associates a glyph with each code (e.g. 41h with 'A').
The high byte is the attribute byte - the low nibble is the foreground colour, the high nibble the background colour.
More information can be found in the EGA/CGA/VGA links above.
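A minimal sketch of writing to this frame buffer from a flat 32-bit kernel (assuming physical address 0B8000h is identity-mapped, which is up to your paging setup):

```c
#include <stdint.h>

/* VGA text mode 03h: 80x25 cells of one 16-bit word each. */
static volatile uint16_t *const text_fb = (volatile uint16_t *)0xB8000;

void putchar_at(int row, int col, char c, uint8_t fg, uint8_t bg)
{
    /* high byte = attribute (low nibble foreground, high nibble
       background), low byte = character code                     */
    uint16_t attr = (uint16_t)((bg << 4) | (fg & 0x0F)) << 8;
    text_fb[row * 80 + col] = attr | (uint8_t)c;
}

/* Example: putchar_at(0, 0, 'A', 0x0F, 0x01);  white 'A' on blue. */
```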
Video mode 13h
It is a graphics mode of 320x200 pixels; the frame buffer is at 0A0000h (physical address), usually accessed as A000h:0000h for the same reason as above.
Each pixel is a single byte, and the value of that byte selects the colour of the pixel.
The default palette can be changed by programming the DAC registers (3C7h, 3C8h and 3C9h on the VGA adapter).
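Again as a hedged sketch (flat 32-bit kernel, identity-mapped frame buffer, GCC-style inline assembly for port I/O), plotting a pixel and redefining a palette entry look like this:

```c
#include <stdint.h>

static volatile uint8_t *const vga_fb = (volatile uint8_t *)0xA0000;

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Mode 13h: one byte per pixel, 320 bytes per scanline. */
void put_pixel(int x, int y, uint8_t color)
{
    vga_fb[y * 320 + x] = color;
}

/* Reprogram one DAC palette entry: write the index to 3C8h, then the
   three 6-bit colour components (0-63) to 3C9h. */
void set_palette(uint8_t index, uint8_t r, uint8_t g, uint8_t b)
{
    outb(0x3C8, index);
    outb(0x3C9, r & 0x3F);
    outb(0x3C9, g & 0x3F);
    outb(0x3C9, b & 0x3F);
}
```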
Answers
A graphics card has two modes: a text mode, and a graphics mode.
Not necessarily; today this distinction may not exist anymore.
The MDA had only a text mode.
The CGA, EGA, VGA and SVGA had both.
The modern approach is to draw the text as graphics; however, during boot or in particular situations (e.g. a BSOD), a basic text-mode video driver is used.
That driver probably uses a BIOS service, since the real video driver may not be available or reliable.
You can write data to a graphics cards using BIOS
Only up to the SVGA era; after that, BIOS support was discontinued.
How do I know what is the base address of the video memory of the graphics card, is this done by probing the PCI bus to get the base address, or is the base address fixed (just like the COM ports base addresses is fixed for example)?
Throughout history, video cards have been connected to the ISA, PCI, AGP and PCIe buses.
Only the ISA bus wasn't configurable (at least not at the beginning); the others have configurable BARs (Base Address Registers) per function (the smallest addressable entity on the PCI bus).
To get the base address of a video card's MMIO registers, the PCI or PCIe bus must be enumerated and the standard registers in the configuration space must be read/set.
Dealing with PCIe is not as easy as dealing with PCI.
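For legacy PCI, this is a well-documented dance on two I/O ports (configuration mechanism #1). A minimal sketch, assuming an x86 kernel with GCC-style inline assembly; it deliberately ignores 64-bit BARs, I/O-space BARs and BAR sizing:

```c
#include <stdint.h>

static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
    uint32_t v;
    __asm__ volatile ("inl %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* PCI configuration mechanism #1: compose an address, write it to
   port 0CF8h, read the selected config-space dword from port 0CFCh. */
static uint32_t pci_cfg_read(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    uint32_t addr = (1u << 31)              /* enable bit           */
                  | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11)
                  | ((uint32_t)fn  << 8)
                  | (off & 0xFC);
    outl(0xCF8, addr);
    return inl(0xCFC);
}

/* BAR0 lives at config-space offset 10h. For a memory BAR, the low
   four bits are flags, so mask them off to get the base address.    */
uint32_t pci_bar0(uint8_t bus, uint8_t dev, uint8_t fn)
{
    return pci_cfg_read(bus, dev, fn, 0x10) & ~0xFu;
}
```

Enumerating means looping over bus/device/function numbers and skipping entries whose vendor ID reads back as FFFFh (no device present).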
Note that not even the UARTs have a fixed address: they are configured by default to map to the legacy addresses (3F8h, 2F8h, 3E8h and 2E8h), but the hardware was (is?) in a SuperIO chip behind a PCI-to-LPC bridge that emulated a PCI-to-ISA bridge.
With the advent of Intel's Platform Controller Hub architecture (i.e. the death of the north and south bridges), the SuperIO chip eventually made it into the PCH or moved behind the SPI controller.
Are all graphics cards accessed in the same way, or do I have to create device drivers for all available graphics cards?
Each graphics card is a beautiful, vicious creature of its own.
A device driver is needed for each model.
Some drivers can be reused for a whole family of models, but this is not true in general.

Programming for Embedded System vs Device Drivers [closed]

What is the difference between programming for embedded systems and programming device drivers? Both areas deal with making the hardware do a specific task. I would appreciate an explanation. I know C, and I would like to go a bit deeper into dealing with the hardware.
What is the difference between programming for embedded systems vs device drivers?
Writing a device driver means a very specific thing: writing low-level code that runs at elevated privilege in the kernel. It's quite tricky, but if your hardware is similar enough to existing hardware, you can sometimes "get by" by copying an existing driver and making a few changes. Writing a driver from scratch involves knowing a lot about the kernel. Device drivers are almost always written in C.
Writing for an "embedded system" isn't very specific. Generally, it means "programming on a computer with fewer resources than a desktop PC, and maybe special hardware". There is no real line between "embedded computer" and "general-purpose computer".
Everyone would agree that an 8-bit system with 128 bytes of RAM is "embedded programming" (an Arduino, say). But the Raspberry Pi (with GBs of RAM, hard drives and an HDMI display) can be considered embedded or not depending on your view. If you unplug the monitor and put it on a robot, more people would say it requires embedded programming. People sometimes call programming apps for phones "embedded programming", but generally they call it "mobile" instead.
Embedded systems can be programmed in high-level languages like Ruby or Python, or even in shell scripts.
What are some purposes of programming device drivers?
Well, any time you have a hardware device, you may need one. These days we have FUSE and libusb, which blur the line. But if you want your Wi-Fi adapter/webcam/USB port to be recognised by the OS, it needs a driver.
What can't you do when programming for embedded systems that you can when programming device drivers, and vice versa?
As I said, embedded systems sometimes contain bash scripts (e.g. my home router).
I'm confused because they both deal with programming for hardware specifically on a low level.
There is some overlap, but they are quite distinct.
'Embedded' is an adjective that describes the whole system, while 'driver' refers to one specific, tiny part of the system. You can do driver programming without doing embedded work (e.g. writing a driver for a webcam on your desktop), and you can do embedded programming without writing new kernel drivers (no need to write drivers if all your hardware is already supported by the kernel).
If I wanted to create a robot, would this fall under embedded systems or device drivers?
On-board robotic systems are usually embedded programming. It gets fuzzy if you strap a laptop to your robot - people might say that's not embedded anymore, since it's a desktop OS. (Embedded systems rarely have a GUI, and if they do, it's rarely a mainstream one.)
Your robot may or may not require writing new drivers. Maybe the motor can be controlled from user space, so you don't need a driver. On the other hand, there are times when you need the extra features found only in the kernel: faster response times, access control, etc. For example, if your program dies, it might leave the motor running, and that's bad. So you can write a kernel driver that cleans up for your program when the program exits. It's a little more work up front, but it can make development simpler down the road.
What about making the GPU of a PC work for that OS - would that be device drivers? And if the hardware is stand-alone, without an OS, is it embedded?
Yes. Writing a GPU driver is writing kernel device-driver code (it's fuzzy these days because of libraries, but whatever). If you wrote it on embedded hardware, you could call it both device-driver and embedded programming.
The way you have posed the question, the answer is that there is no difference. You have asked: what is the difference between an apple and an apple? None.
Now, if you want to compare, say, bare metal and Linux device drivers: Linux device drivers involve a lot of operating-system API calls you have to make, and you have to conform to that sandbox, so there is a lot of work there on top of the poking and peeking of the registers and memory of the various peripherals. If you go bare metal (no operating system), you can do pretty much anything you want; you can create more work for yourself than a (Linux) device driver would, or less.
You can go to the depth of a device driver, or all the way to bare metal - it is your choice. As far as the peripheral is concerned, the things you have to do to it or with it will be similar; the differences have to do with dealing with an operating system versus dealing without one.
Maybe you should pick a task and do it. Something like "send a byte out a serial port" is a reasonable goal (see the sketch below). Putting a pixel on a display (the Raspberry Pi being an exception), anything graphics, anything USB, is not a reasonable first goal: there is a considerable amount of overhead, knowledge and experience you would need before doing that. Blinking an LED (basic GPIO), reading a button, and UART TX and RX are generally where you get your feet wet with bare metal. Granted, tty/UART stuff on Linux is far from beginner material, so you really just have to start trying things, fail, get up, try something else and see where that takes you. Fortunately there are tons of simulators out there, so you can do all of this with free everything: simulators, toolchains, etc.
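For the "send a byte out a serial port" exercise on a PC, the polled-transmit loop for a 16550-compatible UART is about as small as bare metal gets. A sketch assuming the legacy COM1 base address and that the firmware has already set the baud rate:

```c
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

#define COM1 0x3F8   /* legacy base address of the first UART */

/* Polled transmit: wait until the Transmitter Holding Register is
   empty (bit 5 of the Line Status Register at base+5), then write. */
void uart_putc(char c)
{
    while (!(inb(COM1 + 5) & 0x20))
        ;
    outb(COM1, (uint8_t)c);
}
```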

Any open-source ARM7 emulators suitable for linking with C?

I have an open-source Atari 2600 emulator (Z26), and I'd like to add support for cartridges containing an embedded ARM processor (the NXP LPC21xx family). The idea would be to simulate the 6507 until it tries to read or write a byte of memory (which it will do every 841 ns). If the 6507 performs a write, put the address and data on some of the ARM's I/O ports, let the ARM code run 20 cycles, confirm that the ARM is floating its data bus, and let the ARM run for another 38 cycles. If the 6507 performs a read, put the address on the ARM's I/O ports, let the ARM run 38 cycles, grab the data from the ARM's I/O port (hopefully the ARM software will have put it there), and let the ARM run another 20 cycles.
The ARM7 seems pretty straightforward to implement; I don't need to simulate a whole lot of hardware features. Any thoughts?
Edit
What I have in mind is a routine that would take as parameters a struct holding the machine state and pointers to memory-access routines. When called, the routine would emulate the ARM's instruction engine, generating the appropriate reads, writes and code fetches. I could then write the memory-access routines to treat the appropriate areas as flash (with roughly approximated wait states), RAM, I/O ports and timer registers. Some other areas would be marked as don't-care, and accesses to any remaining areas would flag an error and stop the emulator.
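An interface along those lines might look like the following sketch; every name here is hypothetical, it just pins down the shape of the state-struct-plus-callbacks idea:

```c
#include <stdint.h>

/* Caller-owned machine state plus memory-access callbacks. The core
   executes instructions and funnels every bus access through the
   callbacks, which decide what each address is: flash (with wait
   states), RAM, an I/O port, a timer register, don't-care, or an
   error that halts emulation. */
struct arm_state {
    uint32_t r[16];      /* r0-r15; r15 is the program counter */
    uint32_t cpsr;       /* status register                    */

    uint32_t (*read)(void *ctx, uint32_t addr, int size);
    void     (*write)(void *ctx, uint32_t addr, uint32_t val, int size);
    void     *ctx;       /* opaque pointer back to the host emulator */
};

/* Run the core for (at least) `cycles` cycles; return the number
   actually consumed, so the host can keep the 6507 and ARM in step. */
int arm_run(struct arm_state *s, int cycles);
```

The returned cycle count is what lets the host interleave the ARM with the 6507's 841 ns memory accesses.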
Perhaps QEMU uses such a thing internally. Since the ARM emulation would be integrated into an already-existing emulation engine (which I didn't write and don't fully understand - the only parts of Z26 I've patched are the memory read/write logic), I would need something with a fairly small footprint.
Any idea how QEMU works inside? And any idea what the GPL licence would require if I used just 2% of the code in QEMU - would I have to bundle the code for the whole thing, or just the part that I use, or what?
Try QEMU.
With some work, you can make my emulator do what you want. It was written for the ARM920, and the Thumb instruction set isn't done yet; neither is the MMU/cache interface. Also, it's slow, because it is an interpreter. On the bright side, it's all written in C99.
http://code.google.com/p/gp2xemu/
I haven't worked on it for a while (the svn trunk is 2 years old), but if you're going to use the code, I'll be glad to help you out with the missing features. It is licensed under MIT, so it's about as permissive as the broad BSD licence.

NASM accessing sound card directly (No OS)

I'm attempting to write a very simple OS in ASM and C (using the NASM assembler). I would like to access the sound card directly, with or without drivers.
If I don't need drivers, how can I access the sound card and send a sample audio file to it? (An example would be nice.)
If I do need drivers, is there any way to interface with them and call their functions? And how do I then access the sound card and send a sample audio file to it? (Another example would be nice.)
I hate to discourage you, but modern sound card drivers are extremely complicated, and as you probably know, OS-specific. This is one of the difficult challenges in OS development - driver support. It's not something that can be achieved with a simple code snippet.
In order to load a file, you need a file system. Have you implemented that yet? The fact that you used the "kernel" tag suggests that your OS is still in its infancy. I'm not sure I would want to put sound support into the kernel of an operating system.
That being said, there is a good emulator called Bochs that has Sound Blaster 16 emulation, and some really old documentation on how to program that card exists. This might be your best bet: accessing sound hardware was much easier back in the day.
Your best bet is probably to look at either the Linux or FreeBSD sound drivers and see what they do. You're not likely to get much better implementation documentation for any but the simplest sound card...
This is a hard problem. Be warned :-p
Of course you need a driver, and of course there's no easy way to interface with existing ones (there was a proposal for a unified, OS-agnostic "Uniform Driver Interface", but I don't think it went anywhere).
So, after you've written the code to read a file from your hard drive, you'll need to roll your own audio driver.
Now, I haven't done this in a while, so this may be outdated, but in the '90s you'd configure your sound card with a few 'out dx, al' instructions (the details varied across sound cards), and then set up DMA to send data from a memory buffer to the card. The card (or was it the DMA controller?) would fire an interrupt when it reached the end of the buffer, which you'd use to refill the buffer with new data.
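To make that flow concrete, here is a rough, heavily hedged sketch of single-shot 8-bit playback on a Sound Blaster 16, assuming the classic defaults of base port 220h and ISA DMA channel 1 (both are card-configuration dependent). DSP reset/detection and IRQ handling are omitted; double-check every port number against the SB16 programming documentation before trusting it:

```c
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

#define SB_BASE 0x220            /* classic default, jumper/PnP dependent */

static void dsp_write(uint8_t v)
{
    while (inb(SB_BASE + 0xC) & 0x80)
        ;                        /* wait for the DSP write buffer */
    outb(SB_BASE + 0xC, v);
}

/* Play one 8-bit unsigned mono buffer once. `phys` is the buffer's
   physical address; classic ISA DMA requires it below 16 MB and not
   crossing a 64 KB boundary. */
void sb16_play_once(uint32_t phys, uint16_t len, uint16_t rate)
{
    /* Program the 8237 DMA controller, channel 1: single-cycle,
       read transfer (memory -> card). */
    outb(0x0A, 0x05);                    /* mask channel 1             */
    outb(0x0C, 0x00);                    /* clear the byte flip-flop   */
    outb(0x0B, 0x49);                    /* single mode, read, ch. 1   */
    outb(0x83, (phys >> 16) & 0xFF);     /* page register for ch. 1    */
    outb(0x02, phys & 0xFF);             /* address low, then high     */
    outb(0x02, (phys >> 8) & 0xFF);
    outb(0x03, (len - 1) & 0xFF);        /* count - 1, low then high   */
    outb(0x03, ((len - 1) >> 8) & 0xFF);
    outb(0x0A, 0x01);                    /* unmask channel 1           */

    /* Tell the DSP what to do with the data the DMA will feed it. */
    dsp_write(0xD1);                     /* speaker on                 */
    dsp_write(0x41);                     /* set output sample rate...  */
    dsp_write(rate >> 8);                /* ...high byte first         */
    dsp_write(rate & 0xFF);
    dsp_write(0xC0);                     /* 8-bit single-cycle output  */
    dsp_write(0x00);                     /* mode: mono, unsigned       */
    dsp_write((len - 1) & 0xFF);         /* transfer length - 1        */
    dsp_write(((len - 1) >> 8) & 0xFF);
    /* The card raises its IRQ when the transfer completes. */
}
```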
If your card has a working Linux driver, I'd start by looking at its code. Otherwise, you'll have to reverse-engineer the Windows driver; Soft-ICE's bpio (break on I/O port access) logging used to be good for that, IIRC.
Good luck.
Here is a free, open-source operating system written entirely in assembly. It is a great reference for assembly kernel programming if you are new to it.
http://www.menuetos.net/index.htm
