Windows - Write to the video display directly using kernel driver? - c

I am in the process of writing a Windows 7 kernel driver, and I want to have it output debug information via the display. I know that I can just use the DbgPrint or KdPrint functions, but I want to output my string (or anything else) directly to my monitor. That way, I don't have to fire up DebugView to see my output. This also serves as an educational exercise.
As I understand it, I will have to access the frame buffer for the display and write my values to it, correct? However, I have no clue how to do this. Basically, I want to be able to write something onto the monitor directly, so it would overlap whatever Windows is displaying. I know this might sound weird, but it's just for fun.
The main goal is to do this from a KERNEL DRIVER, not inside a userland process. Note that I only want to use a 640x480 resolution, nothing higher. If I understand correctly, anything higher than that would require me to write my own display driver for my current video card.
My system setup:
Windows 7 SP1 x86
Intel Pentium 4 @ 3.00 GHz
Nvidia GeForce FX 5200
Thanks in advance for all of your help!

Now I know that I can just use the DbgPrint or the KdPrint functions, but I want to output my string or anything, directly to my monitor.
You'll have to go through the display driver. Who says that your computer running Windows even has a monitor at all?
Even if it does, there won't be an MS-DOS-style single framebuffer in RAM that stores the current picture. Modern GPUs just don't work that way any more; instead, the operating system hands the GPU individual buffers and asks it to compose the screen from them. In simplified terms: every window is its own framebuffer, and it's the GPU's job to composite all of them into the final screen image.
You also can't just write to some memory region from your kernel driver because you feel like it: a) you don't know where that region would be, and b) you'd be competing with other components, which would be a bad thing.
EDIT I feel I should add this for posterity:
The point is very simple: write a driver that is a driver, not a user interface. That's not the job of a driver. Putting UI functionality into a driver is a bad idea for many reasons, and you simply shouldn't do it.
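For contrast, the conventional route the question dismisses (DbgPrint from the driver, viewed in DebugView or a kernel debugger) is only a few lines. A minimal sketch of a WDM-style entry point:

```c
#include <ntddk.h>

// Minimal driver entry point that logs a message through the kernel
// debug output facility instead of touching the display hardware.
NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);

    DbgPrint("MyDriver: loaded, hello from kernel mode\n");
    return STATUS_SUCCESS;
}
```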

Related

How to make a program run by BIOS?

I searched for info about this but didn't find anything.
The idea is:
If I write a program in C, or any other language, what else do I need to do for it to be recognized and started by the BIOS, like a DOS program or just a simple prompt program?
I got this idea after I booted a flash drive with Windows using the ISO and Rufus, which put some code on the flash drive for the BIOS to recognize and run, so I would like to do the same with a program of mine, for example.
Thanks in advance!
An interesting, but rather challenging exercise!
The BIOS will fetch a specific zone from the boot device, called the Master Boot Record (MBR). In a "normal" situation with an OS and one or more partitions, the MBR needs to figure out where to find the OS, load it into memory, and pass control to it. At that point the regular boot sequence starts, and somewhat later the OS will be running and able to interact with you. More detail on the initial activities can be found here.
Now, for educational purposes, this is not strictly necessary. You could write an MBR that just reads in a fixed part of the disk (the BIOS has functions that let you read raw sectors off a disk; a disk can be considered just a bunch of sectors, each containing 512 bytes of information) and starts that code. You can find an open-source MBR here and basically in any open-source OS.
That was the "easy" part, because now you probably want to do something interesting. Unless you want to talk to each piece of hardware yourself, you will have to rely on the services provided by the BIOS to interact with the keyboard, screen and disk. Traditionally, the best source on BIOS services is Ralf Brown's Interrupt List.
One specific consideration: your C compiler comes with a standard library, and that library needs a specific OS for many of its operations (e.g., to perform output to the screen, it asks the operating system to do it, and the OS in turn typically uses the BIOS or some direct hardware access to perform the task). So, in going the route explained above, you will also need to figure out a way to replace those services with ones that use the BIOS and nothing more, i.e., more or less rewrite the standard library.
In short, to arrive at something usable, you will be writing the essential parts of an operating system...
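To give a flavor of what "rewriting the standard library" means in practice, here is a sketch of a tiny `puts()` replacement built on nothing but the BIOS teletype service (INT 10h, AH=0Eh, per Ralf Brown's list). It assumes a 16-bit real-mode x86 toolchain and GCC-style inline assembly, so treat it as an illustration rather than drop-in code:

```c
// Hypothetical freestanding console output, assuming the code runs in
// real mode and was built with a 16-bit x86 toolchain.
static void bios_putchar(char c)
{
    // INT 10h, AH=0Eh: teletype output. AL = character, BH = page, BL = color.
    __asm__ __volatile__("int $0x10"
                         : /* no outputs */
                         : "a"((unsigned short)(0x0E00 | (unsigned char)c)),
                           "b"((unsigned short)0x0007));
}

// A tiny replacement for the piece of the standard library we lost:
// puts() built on nothing but the BIOS service above.
static void bios_puts(const char *s)
{
    while (*s)
        bios_putchar(*s++);
    bios_putchar('\r');
    bios_putchar('\n');
}
```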
Actually, the BIOS is going to be dead within the next two years (Intel will not support legacy BIOS after that date), so you may want to learn the UEFI standard instead. UEFI from v2.4 allows you to write and add custom UEFI applications. (BTW, the "traditional" BIOS settings screen on UEFI computers is often implemented as a custom UEFI app.)
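A custom UEFI application can be surprisingly small. This is a minimal "hello world" sketch in the style you would build with gnu-efi or EDK II (header name and entry-point convention depend on the toolkit you pick):

```c
#include <efi.h>

// The firmware calls this entry point and hands us the system table,
// which gives access to boot services and the text console.
EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    // ConOut is the firmware's text output protocol; strings are UCS-2.
    SystemTable->ConOut->OutputString(SystemTable->ConOut,
                                      L"Hello from a UEFI application!\r\n");
    return EFI_SUCCESS;
}
```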

Switching to a higher resolution

Recently, I started developing an operating system in NASM and C. I have already made a boot loader, kernel, filesystem, etc. So far I have used VGA text mode directly, writing to the address 0x000B8000. Then I decided to switch to a graphics video mode instead of text mode. I chose the maximal display resolution of 320x200, but then I realised there are three problems. Firstly, there are only 256 different colors. Secondly, the resolution is too small. Thirdly, writing to the address 0x000A0000 is too slow. I tried to do some animations, but it is very laggy and sometimes it waits more than one second before the next frame.
I have searched the internet for explanations of how to switch to higher resolutions such as 1920x1080 and how to use 256*256*256 colors instead of just 256. Everything I found says that it is very hard to use higher resolutions, because you must develop drivers for all the different types of graphics cards, and for some cards there is no documentation, so you must resort to reverse engineering.
I really want to introduce high-resolution graphics to my operating system. Is it really hard or is there any easy method? Any suggestions on how I can solve this?
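(For reference, the text-mode writes mentioned above boil down to storing character/attribute pairs into the buffer at 0xB8000; a minimal sketch for freestanding kernel code with identity-mapped memory:)

```c
#include <stdint.h>

// VGA text mode: 80x25 cells, each cell is 2 bytes (character, attribute).
#define VGA_TEXT_BUFFER ((volatile uint16_t *)0xB8000)
#define VGA_COLS 80

static void vga_putc_at(char c, uint8_t attr, int row, int col)
{
    VGA_TEXT_BUFFER[row * VGA_COLS + col] = (uint16_t)attr << 8 | (uint8_t)c;
}
```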
Nearly every graphics adapter supports VESA framebuffer semantics; you can configure almost every video mode that way. The drawback is that you cannot use vendor-specific features (accelerated graphics, etc.).
The VESA X server, for example, works with almost any graphics adapter (though the model-specific drivers are considerably faster).
See also: https://en.wikipedia.org/wiki/VESA_BIOS_Extensions
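To make the VESA route concrete, the mode info block returned by the VESA BIOS (INT 10h, AX=4F01h) tells you everything you need for a linear framebuffer: resolution, bits per pixel, pitch, and the physical framebuffer address. A sketch of its key fields, following the usual VBE 2.0+ layout (check the spec before relying on the exact offsets):

```c
#include <stdint.h>

// Key fields of the VBE mode info block. Offsets shown in comments.
struct vbe_mode_info {
    uint16_t mode_attributes;      // 0
    uint8_t  win_a, win_b;         // 2, 3  (banked-window fields, unused with LFB)
    uint16_t win_granularity;      // 4
    uint16_t win_size;             // 6
    uint16_t win_a_segment;        // 8
    uint16_t win_b_segment;        // 10
    uint32_t win_func_ptr;         // 12
    uint16_t bytes_per_scanline;   // 16: the pitch you step by per row
    uint16_t x_resolution;         // 18
    uint16_t y_resolution;         // 20
    uint8_t  x_char_size;          // 22
    uint8_t  y_char_size;          // 23
    uint8_t  planes;               // 24
    uint8_t  bits_per_pixel;       // 25
    uint8_t  banks;                // 26
    uint8_t  memory_model;         // 27
    uint8_t  bank_size;            // 28
    uint8_t  image_pages;          // 29
    uint8_t  reserved0;            // 30
    uint8_t  color_masks[9];       // 31..39: red/green/blue/reserved mask info
    uint32_t phys_base_ptr;        // 40: physical address of the linear framebuffer
} __attribute__((packed));
```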
You can do high-res VESA graphics in assembly, and it should be fast enough (especially in the beginning phase, when you are learning and not doing very fancy 3D stuff).
First of all, make sure you are using a good emulator/virtual machine for testing. I was using QEMU and it was way too slow to do any graphics at only 640x480x24bpp. I switched to VirtualBox, and though it starts up quite slowly, I have never looked back.
As for the programming part, I encourage you to look at a project called Pure64. You can find it on GitHub. Go to src/init/isa.asm and look at the end of the file; there is some code that does the VESA initialization. I am actually using Pure64 to set up a clean 64-bit environment and am doing VESA graphics with it, so I can say that it works fine.
The VESA init consists of two parts: getting the mode info and setting the video mode. Once you have the mode info, you get a video base pointer to a region of memory that is contiguous, where you can write your pixels without switching banks or doing anything complicated. At least in 64-bit mode.
The only problem I had with this is that I could not get 32bpp mode working. I can do 24bpp, which is RRGGBB, 3 bytes per pixel (exactly like HTML/CSS color codes). As with everything that consists of 3 bytes on a binary computer, this makes some things a bit more complex (at least for a beginner). Getting 4 bytes per pixel to work still eludes me; maybe this is a limitation of VirtualBox or something.
This all means that for basic hi-res graphics there is no need to do a lot of hardware-specific things. If you are on mildly current hardware, you should do fine.
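Once you have the framebuffer pointer and pitch from the mode info, plotting a pixel in a 24bpp linear mode is a simple address calculation. A sketch (the parameter names are placeholders; adapt them to however your loader stores the mode info):

```c
#include <stdint.h>

// Plot one pixel in a 24bpp linear framebuffer, given the base pointer
// and pitch (bytes per scanline) obtained from the VBE mode info.
static void put_pixel_24(uint8_t *framebuffer, uint32_t pitch,
                         int x, int y, uint8_t r, uint8_t g, uint8_t b)
{
    uint8_t *p = framebuffer + (uint32_t)y * pitch + (uint32_t)x * 3;
    p[0] = b;   // in-memory byte order is commonly B, G, R; check the color masks
    p[1] = g;
    p[2] = r;
}
```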

How to directly access the display for drawing

Context
I've been programming mainly as a hobby for some time now, mostly in C# and Java. I have made many applications (Windows Forms or Java forms) that required animated content. In Java I would use Graphics.drawX() and redraw as a function of time. When the animations happened frequently, the rendering quality would degrade or the application would slow down. I never gave it much thought until I played a video game on the same computer that had so much trouble rendering a simple Java app. How can my computer instantly render a complex moving environment, yet struggle to display a home-made 2048 game? I figured it must be either because I am misusing the draw functions, or because those functions are not optimized for real-time rendering.
Question:
How can I directly access the display without having to go through preprogrammed functions?
I realize this may be hard in higher-level languages, so let's say in C on a Windows OS. (But I would appreciate any answer relating to any language and/or OS.)
I know it's a really vague question, but I can't seem to find the right words to Google it appropriately. Thank you very much for your help!
You can't (or maybe I should say should never) try to access the graphics driver directly on Windows. Prior to Windows, you had to write directly to video memory to do graphics, since DOS did not support graphics or display management, and the stability of those programs was always a bit dicey. On Windows, the OS owns the screen, and you have to work through it to access it.
The very concept of a Windows-based OS is that the OS owns the display and gives applications access to a virtual display, so that the OS can hide it or move it around. In most cases this does not cause a speed problem; but in certain cases, like gaming, you need more speed, so DirectX allows you to transfer some of those tasks to the graphics card to get the speed you need.
For more info on DirectX, check out Microsoft's Graphics and Gaming Resources
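As a small illustration of "working through Windows" rather than around it: even plain GDI will let you draw on the screen through a device context. This is unaccelerated and only meant to show the idea (the OS is free to repaint over it at any time); DirectX is the right tool for fast rendering:

```c
#include <windows.h>

int main(void)
{
    // Ask Windows for a device context covering the whole screen.
    // Everything still goes through the OS and the display driver.
    HDC screen = GetDC(NULL);
    if (screen != NULL) {
        for (int x = 0; x < 200; x++)
            SetPixel(screen, 100 + x, 100, RGB(255, 0, 0)); // draw a red line
        ReleaseDC(NULL, screen);
    }
    return 0;
}
```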

Run executable on MINI2440 with NO OS

I have Fedora installed on my PC and I have a FriendlyARM Mini2440 board. I have successfully installed a Linux kernel and everything is working. Now I have an image-processing program which I want to run on the board without an OS; the only process running on the board should be my program. And in that program, how can I access the on-board camera to capture images, and the serial port to send output to the PC?
You're talking about what is often called a bare-metal environment. Google can help you, for example here. In a bare-metal environment you have to have a good understanding of your hardware because you have to take care of a lot of things that the OS normally handles.
I've been working (off and on) on bare-metal support for my ELLCC cross development tool-chain. I have the ARM implementation pretty far along but there is still quite a bit of work to do. I have written about some of my experiences on my blog.
First off, you have to get your program started. You'll need to write some start-up code, usually in assembly, to handle the initialization of the processor as it comes out of reset (or is powered on). The start-up code then typically passes control to code written in C that ultimately directly or indirectly calls your main() function. Getting to main() is a huge step in your bare-metal adventure!
Next, you need to decide how to support your hardware's I/O devices which in your case include the camera and serial port. How much of the standard C (or C++) library does your image processing require? You might need to add some support for functions like printf() or malloc() that normally need some kind of OS support. A simple "hello world" would be a good thing to try next.
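A bare-metal "hello world" over the serial port is typically just polling two UART registers. A sketch for UART0 on the S3C2440 (the Mini2440's SoC); the register addresses are from the S3C2440 datasheet as I recall them, so verify them, and this assumes the bootloader or your start-up code has already set up clocks and the UART baud rate:

```c
#include <stdint.h>

#define UTRSTAT0 (*(volatile uint32_t *)0x50000010) // UART0 Tx/Rx status
#define UTXH0    (*(volatile uint8_t  *)0x50000020) // UART0 transmit holding

static void uart_putc(char c)
{
    while (!(UTRSTAT0 & (1 << 1)))   // wait for transmit buffer empty
        ;
    UTXH0 = (uint8_t)c;
}

// Called from the assembly start-up code once the stack is set up.
void main(void)
{
    const char *msg = "hello from bare metal\r\n";
    while (*msg)
        uart_putc(*msg++);
    for (;;)
        ;                            // nothing to return to
}
```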
ELLCC has examples of various levels of ARM bare-metal in the examples directory. They range from a simple main() up to and including MMU and TCP/IP support. The source for all of it can be browsed here.
I started writing this before I left for work this morning and didn't have time to finish. Both dwelch and Clifford had good suggestions. A bootloader might make your job a lot simpler and documentation on your hardware is crucial.
First you must realise that without an OS, you are responsible for bringing the board up from reset, including configuring the PLL and SDRAM, as well as for the driver code for every device on the board you wish to use. To do that requires adequate documentation of the board and its devices.
It is possible that you can use the existing bootloader to configure the core and SDRAM, but that may not meet your requirement that the only process running on the board be your image-processing program.
Additionally you will need some means of loading and bootstrapping; again the existing Linux bootstrapper may suit.
It is by no means straightforward and cannot really be described in detail here.

Creating a touch screen driver for OS X: where to start?

OK, so I recently purchased an Acer T232HL touch screen display to hook up to my Macbook Pro as a secondary monitor. To give you an idea, here's my setup.
OS X doesn't support this monitor in any way, so as you can see in the screenshot I'm actually running Windows 8 through VMware, which proxies the USB connection to Windows perfectly where the touch events are supported. But obviously this isn't ideal.
There's at least one 3rd-party driver for OS X that looked sort of promising, but it doesn't seem to support multitouch from this device, it's expensive, and it was generally a pain to get working even to the small degree that it did. There's also mt4j, but as best I could tell after running their examples, it doesn't support this device at all.
So here's my question: what exactly am I in for if I want to write a driver for this thing? I'm mostly a web developer with years of experience in Ruby, Objective-C (and a little C), Javascript, etc. I have never ventured into any kind of hardware programming, so on the surface this feels like an interesting, if intimidating, challenge.
I know that at some level I need to read data from USB. I know this will probably mean trying to reverse engineer whatever protocol they're using for the touch events (is it possible this will be entirely custom?). However, I haven't got a clue where to start: would this be a kernel extension? In C, I presume? I would love a high-level overview of the moving parts involved here.
Ultimately I want to use the touch screen to drive a specialized web interface (running in Chrome), so ideally I could proxy the touch events straight to Chrome without the OS actually moving the mouse cursor to the touch location (so have the UI behave just as it would on an iPad), but regardless of whether this is technically possible, I'd love to start with just getting something working.
You're going to want to start with Apple's I/O Kit documentation. You can hope that the touchscreen isn't completely custom, though there must be some part that's not standard USB HID, or it would work already. If there are any linux (or other open source) drivers available, you'll have the advantage that somebody already did some of the reverse engineering for you. As an alternative to the I/O Kit, you might also want to look into libusb, which might make your brain hurt less when getting started. If you end up needing to write a kext, that might not help you anymore, though.
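To make the libusb route concrete, here is a sketch that dumps raw interrupt-IN reports so you can stare at the bytes and start reverse engineering the touch protocol. The vendor ID, product ID and endpoint address below are placeholders; find the real ones with `system_profiler SPUSBDataType` (or lsusb), and note that on OS X you may still have to wrestle any HID driver that has already claimed the interface:

```c
#include <stdio.h>
#include <libusb-1.0/libusb.h>

#define VENDOR_ID   0x0000   /* placeholder */
#define PRODUCT_ID  0x0000   /* placeholder */
#define ENDPOINT_IN 0x81     /* placeholder: first interrupt IN endpoint */

int main(void)
{
    libusb_init(NULL);

    libusb_device_handle *dev =
        libusb_open_device_with_vid_pid(NULL, VENDOR_ID, PRODUCT_ID);
    if (!dev) {
        fprintf(stderr, "device not found (check VID/PID)\n");
        return 1;
    }

    libusb_claim_interface(dev, 0);

    unsigned char buf[64];
    int transferred = 0;
    // Read a handful of reports and print them as hex.
    for (int i = 0; i < 16; i++) {
        if (libusb_interrupt_transfer(dev, ENDPOINT_IN, buf, sizeof(buf),
                                      &transferred, 1000) == 0) {
            for (int j = 0; j < transferred; j++)
                printf("%02x ", buf[j]);
            printf("\n");
        }
    }

    libusb_release_interface(dev, 0);
    libusb_close(dev);
    libusb_exit(NULL);
    return 0;
}
```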
As to some of your specific questions:
would this be a kernel extension?
Maybe, maybe not. I'm not really up on the Mac OS X driver situation, but I did write some totally user-space USB code for OS X many years ago. Maybe you'll be as lucky.
In C, I presume?
Probably. I/O Kit itself is written in a subset of C++, so you can probably use that too, if you prefer.
is it possible this will be entirely custom?
Unfortunately, yes, it's possible.
Good luck!
