Switching to a higher resolution

Recently, I started developing an operating system in NASM and C. I have already made a boot loader, a kernel, a filesystem, etc. So far I have used VGA text mode directly by writing to the address 0x000B8000. I then decided to switch from text mode to video mode. I chose the maximum display resolution of 320x200 (mode 13h), but then I realised that there are three problems. Firstly, there are only 256 different colors. Secondly, the resolution is too small. Thirdly, writing to the address 0x000A0000 is too slow. I tried to do some animations, but it is very laggy and sometimes it waits more than one second before the next frame.
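For reference, the direct-write approach described above boils down to something like the following C sketch (assuming the kernel has the legacy VGA regions identity-mapped and the mode has already been set):

    /* Direct VGA writes: text mode cells at 0xB8000, mode 13h pixels at 0xA0000. */
    #include <stdint.h>

    static volatile uint16_t *const text_mem  = (volatile uint16_t *)0xB8000; /* 80x25 text cells */
    static volatile uint8_t  *const mode13_fb = (volatile uint8_t  *)0xA0000; /* 320x200, 8bpp    */

    /* Text mode: each cell is (attribute << 8) | character. */
    void putchar_at(int col, int row, char c, uint8_t attr)
    {
        text_mem[row * 80 + col] = ((uint16_t)attr << 8) | (uint8_t)c;
    }

    /* Mode 13h: one byte per pixel, indexing the 256-color palette. */
    void putpixel_mode13(int x, int y, uint8_t color)
    {
        mode13_fb[y * 320 + x] = color;
    }
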
I have searched the internet for explanations of how to switch to higher resolutions such as 1920x1080 and how to use 256*256*256 colors instead of just 256. Everything I found said that it is very hard to use higher resolutions, because you must develop drivers for all the different types of graphics cards, and for some cards there is no documentation, so reverse engineering is required.
I really want to introduce high-resolution graphics to my operating system. Is it really hard or is there any easy method? Any suggestions on how I can solve this?

Nearly every graphics adapter supports VESA framebuffer semantics, and you can configure almost every video mode with it. The drawback is that you cannot use vendor-specific features (accelerated graphics, etc.).
The VESA X server, for example, works with almost any graphics adapter (though the model-specific drivers are considerably faster).
See also: https://en.wikipedia.org/wiki/VESA_BIOS_Extensions

You can do high-res VESA graphics in assembly, and it should be fast enough (especially in the beginning phase, when you are learning and not doing very fancy 3D stuff).
First of all, make sure you are using a good emulator/virtual machine for testing. I was using QEMU and it was way too slow to do any graphics at only 640x480x24bpp. I switched to VirtualBox and, though it starts up quite slowly, I have never looked back.
As for the programming part, I encourage you to look at a project called Pure64. You can find it on GitHub. Go to src/init/isa.asm and look at the end of the file - there is some code that does the VESA initialization. I am actually using Pure64 to set up a clean 64-bit environment and I am doing VESA graphics, so I can say that it works fine.
The VESA init consists of two parts - getting the mode info and setting the video mode. Once you get the mode info, you get a Video Base Pointer to a contiguous region of memory where you can write your pixels without switching banks or doing other complicated things - at least in 64-bit mode.
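For concreteness, here is a sketch of the layout of the VBE mode info block that the "get mode info" call (INT 10h, AX=4F01h) fills in. The field offsets follow the VBE 2.0+ spec; only the fields needed for a linear framebuffer are named, with the rest left as padding:

    #include <stdint.h>

    /* VBE mode info block (256 bytes), as returned by INT 10h, AX=4F01h.
     * Only the fields needed for linear-framebuffer drawing are named. */
    struct vbe_mode_info {
        uint16_t attributes;    /* bit 7 set => linear framebuffer supported  */
        uint8_t  reserved0[14];
        uint16_t pitch;         /* bytes per scanline                         */
        uint16_t width;         /* horizontal resolution in pixels            */
        uint16_t height;        /* vertical resolution in pixels              */
        uint8_t  reserved1[3];
        uint8_t  bpp;           /* bits per pixel (e.g. 24 or 32)             */
        uint8_t  reserved2[14];
        uint32_t framebuffer;   /* physical address of the linear framebuffer */
        uint8_t  reserved3[212];
    } __attribute__((packed));
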
The only problem I had with this is that I could not get 32bpp mode working. I can do 24bpp, which is RRGGBB - 3 bytes per pixel (exactly like HTML/CSS color codes). As with anything that consists of 3 bytes on a binary computer, this makes some things a bit more complex (at least for a beginner). Getting 4 bytes per pixel to work still eludes me. Maybe this is a limitation of VirtualBox or something.
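Once you have the base pointer, pitch and bpp from the mode info, plotting a pixel is just address arithmetic. A minimal sketch, assuming the framebuffer is mapped at fb and that the in-memory byte order is the usual little-endian B, G, R (check the color field positions reported in the mode info if in doubt):

    #include <stdint.h>

    /* Plot one pixel into a linear framebuffer in 24bpp or 32bpp mode. */
    void putpixel(uint8_t *fb, uint32_t pitch, uint8_t bpp,
                  int x, int y, uint8_t r, uint8_t g, uint8_t b)
    {
        uint8_t *p = fb + (uint32_t)y * pitch + (uint32_t)x * (bpp / 8);

        p[0] = b;
        p[1] = g;
        p[2] = r;
        if (bpp == 32)
            p[3] = 0;   /* reserved/alpha byte in 32bpp modes */
    }
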
All this means that for basic hi-res graphics there is no need to do a lot of hardware-specific things. If you are on reasonably current hardware, you should do fine.

Related

Changing display modes from the command line

Way, way back in the day I tried to learn C from a game programming book. If I recall correctly, one of the first things your game "engine" would do would be to switch display modes in order to render. This involved a bit of asm to switch to a 640x480 display mode (mode 13 maybe?) so you could draw directly to the screen. Something like that.
My question is, what is the modern equivalent of this? I'm interested in writing a command-line program that does something similar: drops into some kind of raster mode for me to draw to, but I do not assume that my program would be running under some kind of window manager like KDE, Unity, Aqua, etc.
Would this be something that OpenGL could provide (or does OpenGL assume a window manager too)? My proposed program isn't a game, but would ideally start with a basic clear screen that I can draw primitives on (2D lines, circles, rects, etc.).
Cheers!
Modern operating systems don't give programmers as convenient access to low-level graphics routines as they used to. Partly this is due to the advent of the GPU, which makes utilizing the graphics hardware a much more significant challenge than if you only had a CPU. The other reason is that as window managers have become more and more complex, the graphical sandbox each operating system gives a programmer has become more constrained.
That being said, OpenGL is definitely worth looking at. It's cross-platform, versatile, and automatically utilizes whatever hardware is available (including the graphics card). OpenGL itself doesn't directly provide a windowing context, but you can easily create one with the OpenGL Utility Toolkit (GLUT). OpenGL is, however, very low-level: you'll have to deal with frame buffers and flushing and bit masks and all sorts of low-level details that can make OpenGL development a nightmare if you haven't done it before.
If I were starting a project, I would probably want a more robust graphics environment that provides drawing functions and windowing out of the box. Both SDL and SFML provide low-level graphics APIs that will be a little more friendly to start with. They are both implemented on top of OpenGL so you can use any of the OpenGL features when you want to, but you don't have to worry about some of the more tedious details.
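To give a feel for how little code that takes, here is a minimal SDL2 sketch in C (window creation, clear, one primitive); the window title and sizes are arbitrary, and a real program would run an event loop instead of the fixed delay:

    #include <SDL2/SDL.h>

    int main(void)
    {
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;

        SDL_Window *win = SDL_CreateWindow("demo",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

        SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);       /* clear to black */
        SDL_RenderClear(ren);
        SDL_SetRenderDrawColor(ren, 255, 255, 255, 255); /* white line     */
        SDL_RenderDrawLine(ren, 0, 0, 639, 479);
        SDL_RenderPresent(ren);

        SDL_Delay(3000);                                 /* keep the window up briefly */

        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }
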
As a side note, C might not be the best language to get started with graphics programming these days. If you want a really simple graphics environment that is becoming more relevant every day, you might want to check out what the web has to offer. JavaScript and the HTML5 canvas provide a very simple interface for drawing primitives, images, etc.

Simulate Screen

Is there any way to simulate a Cinema Display (or any other type of display) in C?
I have seen some code but there is almost no documentation.
Implementing IOFramebuffer, as EWFrameBuffer and others do, is the way to go for creating a graphics driver. There is a little bit of breakage in various versions, but with some trial and error it's possible to get things working nicely, including Retina resolutions. Hardware acceleration is separate:
Older versions of OSX used the IOGraphicsAcceleratorInterface for 2D acceleration if your driver provided a CFPlugin bundle that implemented it together with your kext.
I haven't figured it out on Yosemite; it seems that it doesn't use 2D acceleration. To make things worse, software rendering performance is also considerably worse on Yosemite than on previous releases. I encourage anyone who is affected by this (headless mac mini, OS X in VMs, virtual displays, etc.) to file a Radar with Apple. I have already done so, but the more people complain, the more likely it is that they'll do something about it.
The 3D acceleration (OpenGL) APIs are private on all versions. I'm not aware of any 3rd party implementation of them, open source or otherwise, unless you count the Intel/AMD/nVidia GPU drivers, which seem to be developed in cooperation between Apple and the relevant company.
UPDATE: It turns out that Yosemite's WindowServer limits frame rates to about 8 fps unless your IOFramebuffer driver correctly implements vertical blank interrupts. So if your driver doesn't already do so, implement the methods registerForInterruptType(), unregisterInterrupt() and setInterruptState() so that they work with the interrupt type kIOFBVBLInterruptType, and generate a callback every time you finish emitting a full image. The details of this will depend on your device (or lack thereof). This doesn't solve the hardware acceleration and rendering glitch issues, but it does at least improve performance somewhat (at the cost of higher CPU load).

Displaying CUDA-processed images in WPF

I have a WPF application that acquires images from a camera, processes these images, and displays them. The processing part has become burdensome for the CPU, so I've looked at moving this processing to the GPU and running custom CUDA kernels against them. The basic process is as follows:
1) acquire image from camera
2) load image onto GPU
3) call CUDA kernel to process image
4) display processed image
A WPF-to-CUDA-to-Display Control strategy is what I'm trying to figure out.
It seems natural that once the image is loaded onto the GPU that it would not have to be unloaded in order to be displayed. I've read that this can be done with OpenGL, but do I really need to learn OpenGL and include it in my project in order to do a fast display of a CUDA-processed image?
I understand (I think) the issues of calling CUDA kernels from C#. My plan is to either build an unmanaged library around my CUDA calls, which I later wrap for C# -- OR -- try to decide on which one of the managed wrappers (managedCUDA, Cudafy, etc.) to try. I worry about using one of the prebuilt wrappers because they all appear to be lightly supported...but maybe I have the wrong impression.
Anyway, I'm feeling a bit overwhelmed after days of researching the possible options. Any advice would be greatly appreciated.
The process of taking the result of a CUDA computation and using it directly on the device for a graphics activity is called "interop". There is OpenGL interop and there is DirectX interop. There are plenty of CUDA samples demonstrating how to display computed images this way.
To go directly from computed data on the device, to display, without a trip to the host, you will need to use one of these 2 APIs (OpenGL or DirectX).
You mentioned two of the managed interfaces I've heard of, so it seems like you're aware of the options there.
If the processing time is significant compared to (much larger than) the time taken to transfer the image from host to device, you might consider starting out by just transferring the image from host to device, processing it, and then transferring it back, where you can then use the same plumbing you have been using to display it. You can then decide if the additional effort for interop is worth it.
If you can profile your code to figure out how long the image processing takes on the host, and then prototype something on the device to find out how much faster it is, that will be instructive.
You may find that the processing time is so long you can even benefit from the double-copy arrangement. Or you may find the processing time is so short on the host (compared to just the cost to transfer to the device) that the CUDA acceleration would not be useful.
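As a rough sketch of that double-copy arrangement using the CUDA runtime C API (launch_process_image() here is a hypothetical wrapper around your actual kernel launch, not a library function):

    #include <cuda_runtime.h>
    #include <stdint.h>
    #include <stddef.h>

    /* hypothetical wrapper, defined in a .cu file, that launches the kernel */
    extern void launch_process_image(uint8_t *d_pixels, int width, int height);

    int process_frame(uint8_t *host_pixels, int width, int height)
    {
        size_t bytes = (size_t)width * height * 4;   /* assuming 32bpp frames */
        uint8_t *d_pixels = NULL;

        if (cudaMalloc((void **)&d_pixels, bytes) != cudaSuccess)
            return -1;

        cudaMemcpy(d_pixels, host_pixels, bytes, cudaMemcpyHostToDevice);
        launch_process_image(d_pixels, width, height);   /* runs on the GPU */
        cudaMemcpy(host_pixels, d_pixels, bytes, cudaMemcpyDeviceToHost);

        cudaFree(d_pixels);
        return 0;   /* host_pixels now holds the processed frame for the existing display path */
    }
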
WPF has a control named D3DImage to show DirectX content directly on screen, and in the managedCuda samples package you can find a version of the original fluids sample from the CUDA Toolkit that uses it (together with SlimDX). You don't have to use managedCuda to use CUDA in C#, but you can look at it to see how things can be done: managedCuda samples

Are there any ARM based systems/emulators with a graphical frame buffer that allow for (relatively) legacy-free Assembly programming?

I am looking for a modern system to do some bare-bones assembly programming on (for fun/learning) that does not have the legacy burden of x86 platforms (where you still have to deal with the BIOS, switching to protected mode, VESA horrors to be able to output pixels to the screen in modern resolutions/color depths, etc.). Do such systems even exist? I suspect it is not even possible today to do low-level graphics programming without dealing with proprietary hardware.
QEMU is likely what you want if you don't want to have to build that stuff yourself. You won't get as much visibility into what is going on in the guts of it, though.
For hardware: the BeagleBoard (don't get the old one; get the new one with reasonable connectors, etc.), or the OpenRD board. I was disappointed with the plug computer thing. I like the Hawkboard better than the BeagleBoard, but I am concerned about the big banner about a PCB design problem. The Raspberry Pi will be out at some point and will also provide what you are looking for. Note that for the BeagleBoard and the like you don't have to run Linux or anything like that; you can write your own binary, xmodem it over or use the network, and then just run it - not a problem at all.
The Stellaris eval boards all (or most) have OLED displays - monochrome and small, but graphics nonetheless; not sure how much you were after.
Earth-LCD used to have an ARM-based board with a decent-sized panel on it.
There is of course the Game Boy Advance and the Nintendo DS. Flash/developer cartridges are under $20. The GBA is better to start with IMO, as the NDS is like two GBAs competing for shared resources, and a little confusing. With an EZ Flash cartridge (open-source software to program it), it was easy to put a bootloader on the GBA, and for about another $20 I created a serial cable; I have a serial-based bootloader for loading programs. If you have an interest in this path, start with the VisualBoyAdvance emulator to get your feet wet and see how you feel about the platform.
If you go to sparkfun.com there are likely a number of boards that either already have LCD connectors you could mate up with a display, or at least displays and breakout boards that you could connect to any number of microcontroller development boards. Other than the insanely painful blue LEDs, and the implication that there is 64KB of RAM (there is, but it is a non-linear 32KB+16+16), the mbed board is nice: up to 100MHz, Cortex-M3. I have some mbed samples at github as well that walk you through building an ARM binary to boot an ARM from flash, for those that have not done it (and want to learn that rather than call some APIs in a sandbox).
The Armmite Pro and the Maple (SparkFun) are ARM-based Arduino-footprint platforms, so for example you can get the color LCD shield or the Gameduino.
There is the Open Pandora project. I was quite disappointed with the experience: after over a year I paid another fee to get the unit, and it failed within a few minutes. I sent it back, and I need to check my credit card statement - maybe we took the "return it and give it to someone who wants it" path. I have used the GamePark GP32 and GPX2, but not the Wiz. The GPX2 was fine other than some memory I/O problem in the chip that caused chaotic timing; the thing would run just fine, but memory performance was all over the map and non-deterministic. The GP32 is not what you are looking for, but the GPX2 might be; finding connectors for a serial cable might be more difficult now that the cell phone cables folks used to cut up are not as readily available.
Gen 1 iPod Nanos can still be had easily, as well as the older-gen iPod Classics. They are easy to homebrew, and the LCD panels are easy to get at - grayscale only, maybe even just black and white, I don't remember. All the programming info can be had from the iPodLinux folks.
I have not tried it yet, but the Barnes & Noble folks are homebrew friendly, or as friendly as anyone at that scale has been so far. The Nook Color can easily be turned into a generic Android device, so I assume that also means you could develop homebrew on the metal; I am not sure though, I have not studied it.
You might look at Always Innovating; my experience with them was similar to the Open Pandora folks. These folks started with a modified BeagleBoard in a box with a display and batteries, then added a couple more products; any one of them should be very open and homebrew friendly, so you can write at whatever level you want and boot and run on the metal, no problem. For the original product it was one of those wait-for-several-months things.
I am hoping the Raspberry Pi becomes the next BeagleBoard, but better.
BTW, all hardware is proprietary; it is just a matter of whether the vendor chooses to provide programming information or not. VESA came about because no two vendors did it the same way, and that has not changed: you still have to read the datasheets and programmer's reference manuals. But as you can see above, I have only scratched the surface, and covered only the items under (or close to) $100. If you are willing to pay thousands of dollars, that greatly opens the door to graphics-based development platforms that are well documented and relatively sandbox free. Many are ARM based, since ARM is the choice for phones, etc., and these are phone-like, tablet-like eval platforms.
The Android emulator is such a beast; it runs a Linux kernel and driver stack (including /dev/fb) that one can log into via the Android Debug Bridge and run (statically linked) arm-linux-eabi applications on. Framebuffer access is possible; see the sketch below.
The meta-question, rather, is what you mean by "low-level" graphics programming; no emulator is going to expose all the register and chip-state complexity that sits behind a modern graphics chip's pipeline. But simple framebuffer contents manipulation (pixel buffer access) is surely simple enough, as is experimenting with software rendering in ARM assembly.
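A sketch of what that pixel buffer access looks like against the Linux framebuffer device mentioned above, assuming a 32bpp mode and that nothing else (e.g. a compositor) owns the display:

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0)
            return 1;

        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        ioctl(fd, FBIOGET_VSCREENINFO, &var);   /* resolution, bits per pixel */
        ioctl(fd, FBIOGET_FSCREENINFO, &fix);   /* line length, buffer size   */

        uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED)
            return 1;

        /* paint a grey square in the top-left corner (32bpp assumed) */
        for (uint32_t y = 0; y < 100 && y < var.yres; y++)
            for (uint32_t x = 0; x < 100 && x < var.xres; x++)
                *(uint32_t *)(fb + y * fix.line_length + x * 4) = 0x00808080;

        munmap(fb, fix.smem_len);
        close(fd);
        return 0;
    }
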
Of course, things that you can do with the Android emulator you can also do with cheap physical ARM hardware, like the BeagleBoard and similar. Real complexity only begins when you want to access "advanced" things - that is, any accelerated functionality beyond just reading/writing framebuffer contents.
New Answer
I recently came across this while looking for emulators to run NetBSD on: there's a project called GXemul that provides full-system computer architecture emulation with support for a variety of virtual devices and CPUs. The primary and most up-to-date core looks to be MIPS-based, but it also lists support for emulating the ARM architecture. It even includes an integrated debugger, and it sounds like you can just assemble your code into a raw binary with some bootstrapping code and boot it as a kernel inside the emulator from the command line.
Previous Answer
This isn't an emulator, but if you're interested in having a complete, ARM-based computer that you can develop whatever you want on that doesn't cost much, you should keep an eye on the Raspberry Pi project. They're very close to selling a complete, tiny, low-power ARM-based computer for $25 a piece. It has USB ports, ethernet, video out, and an SD card reader, and can boot Linux, although in your case you'd probably want to boot your own code and access the hardware directly.
EDIT: Looks like Erik already mentioned it.

Single application build for multiple mobile devices

Is it possible to have one application binary build for multiple mobile devices (on the BREW platform), rather than making a separate build for each device using a build script with conditional compilation?
In particular, is it possible to use a single BREW application build for multiple screen resolutions?
Note that the goal is to have a single binary build. If it were just a matter of having a single codebase, then conditional compilation and a smart build script would do the trick.
Yes, it is possible; we were able to do this at my previous place of work. What's required is tricky, though:
Compile for the lowest common denominator BREW version. Version 1.1 is the base for all current handsets out there.
Your code must be able to handle multiple resolutions. The methods for detecting screen width and height are accurate for all handsets in my experience (see the sketch after this list).
All your resources must load on all devices. This would require making your own custom image loader to work around certain device issues. For sound, I know simple MIDI type 0 works on all but QCP should also work (no experience of it myself).
Use bitmap fonts. There are too many device issues with fonts to make it worthwhile using the system fonts.
Design your code structure as a finite state machine. I cannot emphasise this enough - do this and many, many problems never materialise.
Have workarounds for every single device issue. This is the hard part! It's possible but this rabbit hole is incredibly deep...
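For the resolution-detection point above, a minimal sketch of the usual approach (this is from memory, so treat the exact struct fields as assumptions and check them against the BREW headers):

    #include "AEEShell.h"

    /* Query the usable screen size at runtime via ISHELL_GetDeviceInfo(). */
    static void get_screen_size(IShell *shell, int *width, int *height)
    {
        AEEDeviceInfo info;
        info.wStructSize = sizeof(info);    /* must be set before the call */
        ISHELL_GetDeviceInfo(shell, &info);

        *width  = info.cxScreen;            /* usable screen width in pixels  */
        *height = info.cyScreen;            /* usable screen height in pixels */
    }
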
In the end, the more complex and advanced the application, the less likely you can go this route. Some device properties simply cannot be detected reliably at runtime (such as platform ID) and so multiple builds are then required.
I wrote a J2ME-to-BREW conversion that is used at Javaground. It is quite possible to write multiple-resolution, single-binary code. We have a database of device bugs so that the code can detect the device via platform ID and then generate a series of flags marking which bugs apply. For example, most (if not all) of the Motorola BREW phones have a bug where an incoming call does not interrupt the application until you answer the call, so I use TAPI to monitor for an incoming call and generate a hideNotify event (since we are emulating Java, although the generated code is pure C++). I also do some checks at runtime for the BREW version and disable certain APIs if it is BREW 2 rather than BREW 3.
3D type games are easier to make resolution independent since you are scaling in software.
Also, there are two separate APIs for sound, IMEDIA and ISOUNDPLAYER. ISOUNDPLAYER is the older API and is supported on all devices, but it doesn't have as many facilities (you can only do multichannel audio using IMEDIA). I create an IMEDIA object and fall back to creating an ISOUNDPLAYER object if I can't get the IMEDIA object.
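A sketch of that fallback pattern (the class IDs AEECLSID_MEDIAMIDI and AEECLSID_SOUNDPLAYER are what I would expect here, but treat them as assumptions and verify them against the BREW headers for your content type):

    #include "AEEShell.h"
    #include "AEEError.h"
    #include "AEEMedia.h"
    #include "AEESoundPlayer.h"

    typedef struct {
        IMedia       *media;    /* preferred: richer, multichannel-capable API */
        ISoundPlayer *sound;    /* fallback: older API, supported everywhere   */
    } AudioBackend;

    static void audio_init(IShell *shell, AudioBackend *a)
    {
        a->media = NULL;
        a->sound = NULL;

        /* try the newer IMedia interface first (class ID is an assumption) */
        if (ISHELL_CreateInstance(shell, AEECLSID_MEDIAMIDI,
                                  (void **)&a->media) != SUCCESS) {
            a->media = NULL;
            /* fall back to ISoundPlayer on devices without IMedia support */
            ISHELL_CreateInstance(shell, AEECLSID_SOUNDPLAYER,
                                  (void **)&a->sound);
        }
    }
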
The problem with a totally universal build is that there is a big difference in capability, so it can be worth having a few builds: the older devices have under 1MB of heap (and a small screen), while later ones give you a lot more to work with - 6MB+ heaps and larger screens (176x204 and up).
With BREW you do have a fairly consistent set of key values (unlike Java), although some of the newer devices are touch screen (so you have to handle pointer input) or have rotating screens.
There are also some old Nokia phones that use big-endian mode, which means the files are not the same as the normal mod files (UNLESS you want to write some REALLY cool assembly language prefix header that decodes the file).
Another idea might be to divide the handsets into 2 to 4 categories based on, say, screen dimensions and create builds for them. It is a much faster route too, as you will be able to support all the handsets you want to support with much less complexity.
Another thing to check is the BREW versions on the handsets you want to launch on. If, say, BREW 1.1 is on one handset and that handset is owned by a small percentage of your target market, it doesn't make sense to go to the effort of supporting it.

Resources