I was doing some .NET programming in WPF that involved reading .png and .txt files, placing images on a canvas, moving them around and then removing them, and all of a sudden I got a bluescreen. I didn't think my little tinkering could cause a driver issue, until I restarted, did the exact same thing with my program, and got another driver error. It seems that an Intel graphics driver had failed and my resolution went way down. No bluescreen the second time, though. I was playing pretty fast and loose adding and removing elements from the children of the canvas. My question is: how could such simple programming cause such a serious error, and how do I fix it?
As a first-order approximation, all bluescreens are caused by buggy drivers, and most driver bugs are race conditions (see footnote 1). That is, it's not necessarily the specific operations, but the exact timing of them interacting with other things that are out of your control.
So if you can nail it down to a sequence of operations, then you can just avoid that sequence. But usually the only fix is to get better drivers, and not necessarily the latest drivers; sometimes it's best to go back to an older set.
Footnotes:
1. Of course, not all bluescreens are caused by drivers, but they all come from kernel mode. It should never be possible for user-mode code to make calls that crash in kernel mode, so when you hit a bluescreen, you have found a bug in some component that runs in kernel mode. A bluescreen is by definition not your bug (see footnote 2), or at least not only your bug.
2. Unless you write drivers.
It sounds like you have a buggy graphics driver. WPF by itself cannot trigger a blue screen -- but WPF does call DirectX, which in turn calls the graphics driver -- and if the graphics driver contains bugs, those can cause a blue screen.
You cannot fix this at the WPF level, because WPF is innocent in this particular problem. You need to update your graphics driver. If there are no updates, or the latest version doesn't fix it, a possible workaround is to disable hardware acceleration for WPF, as this may avoid hitting the buggy code path in the driver.
Try disabling WPF hardware rendering (DirectX) and see if your problem goes away:
Graphics Rendering Registry Settings
Excerpt:
Disable Hardware Acceleration Option
HKEY_CURRENT_USER\SOFTWARE\Microsoft\Avalon.Graphics\DisableHWAcceleration (REG_DWORD)
The disable hardware acceleration option enables you to turn off hardware acceleration for debugging and test purposes. When you see rendering artifacts in an application, try turning off hardware acceleration. If the artifact disappears, the problem might be with your video driver.
The disable hardware acceleration option is a DWORD value that is either 0 or 1. A value of 1 disables hardware acceleration. A value of 0 enables hardware acceleration, provided the system meets hardware acceleration requirements; for more information, see Graphics Rendering Tiers.
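If you'd rather flip that switch programmatically than through regedit, a minimal Win32 sketch along these lines should work (per-user key only; restart the WPF application afterwards for the setting to take effect):

```cpp
#include <windows.h>

// Sets HKCU\SOFTWARE\Microsoft\Avalon.Graphics\DisableHWAcceleration = 1,
// which turns WPF hardware acceleration off for the current user.
int main() {
    HKEY key = nullptr;
    LSTATUS status = RegCreateKeyExW(
        HKEY_CURRENT_USER, L"SOFTWARE\\Microsoft\\Avalon.Graphics",
        0, nullptr, 0, KEY_SET_VALUE, nullptr, &key, nullptr);
    if (status != ERROR_SUCCESS)
        return 1;

    DWORD disable = 1;  // 1 = disable hardware acceleration, 0 = enable it again
    status = RegSetValueExW(key, L"DisableHWAcceleration", 0, REG_DWORD,
                            reinterpret_cast<const BYTE*>(&disable),
                            sizeof(disable));
    RegCloseKey(key);
    return status == ERROR_SUCCESS ? 0 : 1;
}
```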
Related
Recently, I started developing an operating system in NASM and C. I have already made a boot loader, kernel, filesystem, etc. So far I have used the VGA text mode directly, writing to the address 0x000B8000. Then I decided to switch to a video mode instead of text mode. I chose the maximum display resolution, 320x200, but then I realised that there are three problems. Firstly, there are only 256 different colors. Secondly, the resolution is too small. Thirdly, writing to the address 0x000A0000 is too slow. I tried to do some animations, but it is very laggy and sometimes it waits more than one second before the next frame.
I have searched the internet for explanations of how to switch to higher resolutions such as 1920x1080 and how to use 256*256*256 colors instead of just 256. Everything I found said that it is very hard to use higher resolutions, because you must develop drivers for all the different types of graphics cards, and for some cards there is no documentation, so you have to resort to reverse engineering.
I really want to introduce high-resolution graphics to my operating system. Is it really hard or is there any easy method? Any suggestions on how I can solve this?
Nearly every graphics adapter supports VESA framebuffer semantics; you can configure almost any video mode that way. The drawback is that you cannot use vendor-specific features (accelerated graphics, etc.).
The VESA X server, for example, works with almost any graphics adapter (but the model-specific drivers are considerably faster).
See also: https://en.wikipedia.org/wiki/VESA_BIOS_Extensions
You can do high-res VESA graphics in assembly, and it should be fast enough, especially in the beginning phase when you are learning and not doing very fancy 3D stuff.
First of all, make sure you are using a good emulator/virtual machine for testing. I was using QEMU and it was way too slow to do any graphics, even at only 640x480x24bpp. I switched to VirtualBox, and though it starts up quite slowly, I have never looked back.
As for the programming part, I encourage you to look at a project called Pure64. You can find it on GitHub. Go to src/init/isa.asm and look at the end of the file - there is some code to do VESA initializations. I am actually using Pure64 to set up a clean 64bit environment and I am doing VESA graphics so I can say that it works fine.
The VESA init consists of two parts: getting the mode info and setting the video mode. Once you get the mode info, you get a Video Base Pointer to a contiguous region of memory where you can write your pixels without switching banks or doing anything complicated. At least in 64-bit mode.
The only problem I had with this is that I could not get 32bpp mode working. I can do 24bpp, which is RRGGBB, 3 bytes per pixel (exactly like HTML/CSS color codes). As with everything that consists of 3 bytes on a binary computer, this makes some things a bit more complex (at least for a beginner). Getting 4 bytes per pixel to work still eludes me. Maybe this is a limitation of VirtualBox or something.
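Just to make the pixel-writing part concrete, here is the same idea as a small C-style sketch rather than assembly (the struct and field names are invented for illustration; on real hardware you would fill them from the VESA mode info block, and the in-memory byte order for 24bpp is usually B, G, R rather than the logical RRGGBB):

```cpp
#include <cstdint>

// Mode parameters you would normally read out of the VESA mode info block.
// These names are placeholders for the sketch, not a real header.
struct ModeInfo {
    uint8_t* framebuffer;     // linear framebuffer base ("Video Base Pointer")
    uint32_t pitch;           // bytes per scanline
    uint32_t bytes_per_pixel; // 3 for a 24bpp mode
};

// Plot one pixel in a 24bpp linear framebuffer: no bank switching, just
// base + y * pitch + x * bytes_per_pixel.
static void put_pixel(const ModeInfo& m, uint32_t x, uint32_t y,
                      uint8_t r, uint8_t g, uint8_t b) {
    uint8_t* p = m.framebuffer + y * m.pitch + x * m.bytes_per_pixel;
    p[0] = b;  // blue
    p[1] = g;  // green
    p[2] = r;  // red
}
```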
This all means that for basic hi-res graphics there is no need to do a lot of hardware-specific things. If you are on reasonably current hardware, you should do fine.
Is there any way to make a Cinema (or any type of) Display in C?
I have seen some code but there is almost no documentation.
Implementing IOFramebuffer, as EWFrameBuffer etc. do, is the way to go for creating a graphics driver. There is a little bit of breakage in various versions, but it's possible to get things working nicely, including retina resolutions, with some trial and error. Hardware acceleration is separate:
Older versions of OSX used the IOGraphicsAcceleratorInterface for 2D acceleration if your driver provided a CFPlugin bundle that implemented it together with your kext.
I haven't figured it out on Yosemite; it seems that it doesn't use 2D acceleration. To make things worse, software rendering performance is also considerably worse on Yosemite than on previous releases. I encourage anyone who is affected by this (headless mac mini, OS X in VMs, virtual displays, etc.) to file a Radar with Apple. I have already done so, but the more people complain, the more likely it is that they'll do something about it.
The 3D acceleration (OpenGL) APIs are private on all versions. I'm not aware of any 3rd party implementation of them, open source or otherwise, unless you count the Intel/AMD/nVidia GPU drivers, which seem to be developed in cooperation between Apple and the relevant company.
UPDATE: It turns out that Yosemite's WindowServer limits frame rates to about 8fps unless your IOFramebuffer driver correctly implements vertical blank interrupts. So if your driver doesn't already do so, implement the methods registerForInterruptType(), unregisterInterrupt() and setInterruptState() to handle the interrupt type kIOFBVBLInterruptType, and generate a callback every time you finish emitting a full image. The details of this will depend on your device (or lack thereof). This doesn't solve the hardware acceleration and rendering glitch issues, but it does at least improve performance somewhat (at the cost of higher CPU load).
I am sorry if it appears this question has been done to death. I've done plenty of research, however, and it seems there is no well-known solution to what seems a simple problem: take a screenshot in Windows.
There's a catch of course: the screenshot is to be manipulated in some way (GPU-side, with shaders etc.), so there is no option of a slow copy to system memory. Instead the copy must somehow stay in graphics memory. GetFrontBuffer and the like are limited in this sense (they don't work, full stop; I've checked).
I am aware of several closed questions on the stack exchange network ("Don't bother, not possible") and 2 with open bounties that amount to solving this problem.
Windows 7 introduces some changes to the graphics system, so now there is a compositing window manager, etc. Apparently GDI is also now 'hardware accelerated', so I was hoping this would have exposed a simple path to a possible solution:
GDI desktop window device context (in GPU memory) -> some Direct2D or Direct3D surface
In my particular case, just getting the DC in gpu memory is sufficient, but I am looking for a general solution.
So, how does one take a GPU-side screenshot in Windows?
I am looking for a modern system to do some bare-bones assembly programming (for fun/learning) that does not have the legacy burden of x86 platforms (where you still have to deal with the BIOS, switching to protected mode, VESA horrors to be able to output pixels to the screen in modern resolutions/color depths, etc.). Do such systems even exist? I suspect it is not even possible today to do low-level graphics programming without dealing with proprietary hardware.
QEMU is likely what you want if you don't want to have to build that stuff in. You won't get as much visibility into what is going on in the guts of it.
For hardware: the BeagleBoard (don't get the old one; get the new one with reasonable connectors, etc.), or the OpenRD board. I was disappointed with the plug computer thing. The Hawkboard I like better than the BeagleBoard, but I am concerned about the big banner about a PCB design problem. The Raspberry Pi will be out at some point and will also provide what you are looking for. Note that for the BeagleBoard, etc., you don't have to run Linux or anything like that; you can write your own binary, xmodem it over or use the network, and then just run it, not a problem at all.
The Stellaris eval boards all (or most) have OLED displays: monochrome and small, but graphics nonetheless; not sure how much you were after.
Earth-LCD used to have an ARM-based board with a decent-sized panel on it.
There are of course the Game Boy Advance and the Nintendo DS. Flash/developer cartridges are under $20. The GBA is better to start with IMO, as the NDS is like two GBAs competing for shared resources and a little confusing. With an EZ Flash cartridge (open source software to program it), it was easy to put a bootloader on the GBA and, for another $20 or so, to create a serial cable; I have a serial-based bootloader for loading the programs. If you have an interest in this path, start with the VisualBoyAdvance emulator to get your feet wet and see how you feel about the platform.
If you go to sparkfun.com there are likely a number of boards that either already have LCD connectors you could mate up with a display, or displays and breakout boards that you could connect to a number of microcontroller development boards. Other than the insanely painful blue LEDs, and the implication that there is 64KB (there is, but non-linear: 32KB+16+16), the mbed board is nice: up to 100MHz, Cortex-M3. I have some mbed samples on GitHub as well that walk you through building an ARM binary to boot an ARM from flash, for those that have not done it (and want to learn that rather than call some APIs in a sandbox).
The Armmite Pro and the Maple (SparkFun) are ARM-based Arduino-footprint platforms, so for example you can get the color LCD shield or the Gameduino.
There is the Open Pandora project. I was quite disappointed with the experience: after over a year I paid another fee to get the unit, and it failed within a few minutes. I sent it back, and I need to check my credit card statement; maybe we took the 'return it and give it to someone who wants it' path. I have used the GamePark GP32 and GP2X, but not the Wiz. The GP2X was fine other than a memory I/O problem in the chip that caused chaotic timing; the thing would run just fine, but memory performance was all over the map and non-deterministic. The GP32 is not what you are looking for, but the GP2X might be; finding connectors for a serial cable might be more difficult now that the cell phone cable folks used to cut up is not as readily available.
Gen 1 iPod Nanos can still be had easily, as well as the older-gen iPod Classics. They are easy to homebrew on, and the LCD panels are easy to get at. Grayscale only, maybe even black and white only, I don't remember. All the programming info is available from the iPodLinux folks.
I have not tried it yet, but the Barnes & Noble folks are homebrew friendly, or as friendly as anyone on that scale has been so far. The Nook Color can easily be turned into a generic Android device, so I assume that also means you could develop homebrew on the metal; not sure though, I have not studied it.
You might look at Always Innovating; my experience with them was similar to the Open Pandora folks. These folks started with a modified BeagleBoard in a box with a display and batteries, then added a couple more products; any one of them should be very open and homebrew friendly, so you can write at whatever level you want and boot and run on the metal, no problem. For the original product it was one of those wait-for-several-months things.
I am hoping the Raspberry Pi becomes the next BeagleBoard, but better.
BTW, all hardware is proprietary; it is just a matter of whether the vendor chooses to provide programming information or not. VESA came about because no two vendors did it the same way, and that has not changed: you still have to read the datasheets and programmer's reference manuals. But as you can see above I have only scratched the surface, and covered the sub- or close-to-$100 items. If you are willing to pay in the thousands of dollars, that greatly opens the door to graphics-based development platforms that are well documented and relatively sandbox-free. Many are ARM-based, since ARM is the choice for phones, etc., and these are phone-like, tablet-like eval platforms.
The Android emulator is such a beast; it runs a Linux kernel and driver stack (including /dev/fb) that one can log into via the Android Debug Bridge, and run (statically linked) arm-linux-eabi applications. Framebuffer access is possible; see the sketch at the end of this answer.
The meta-question rather is, what do you mean by "low-level" graphics programming; no emulator is going to expose all the register and chip state complexity that's behind a modern graphics chip pipeline. But simple framebuffer contents manipulation (pixel buffer access) is surely simple enough, as is experimenting with software rendering in ARM assembly.
Of course, things that you can do with the Android emulator you can also do with cheap physical ARM hardware, like the BeagleBoard and similar. Real complexity only begins when you want to access "advanced" things; that is, any accelerated functionality beyond just reading/writing framebuffer contents.
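To make the "pixel buffer access" part concrete, here is a minimal user-space sketch against the Linux framebuffer device, the same /dev/fb interface mentioned above (it assumes /dev/fb0 exists and a 32bpp mode; on a real target you would inspect var.red/green/blue instead of hard-coding the pixel layout):

```cpp
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>

int main() {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return 1;

    fb_var_screeninfo var{};
    fb_fix_screeninfo fix{};
    ioctl(fd, FBIOGET_VSCREENINFO, &var);  // resolution, bits per pixel
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);  // line length, buffer size

    void* mem = mmap(nullptr, fix.smem_len, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) return 1;
    auto* fb = static_cast<uint8_t*>(mem);

    // Fill the visible screen with one color, assuming 32 bits per pixel.
    for (uint32_t y = 0; y < var.yres; ++y) {
        auto* row = reinterpret_cast<uint32_t*>(fb + y * fix.line_length);
        for (uint32_t x = 0; x < var.xres; ++x)
            row[x] = 0x00336699;
    }

    munmap(mem, fix.smem_len);
    close(fd);
    return 0;
}
```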
New Answer
I recently came across this while looking for emulators to run NetBSD on, but there's a project called GXemul that provides a full-system computer architecture emulation with support for a variety of virtual devices and CPUs. The primary and most up-to-date core looks to be MIPS-based, but it also lists support for emulating the ARM architecture. It even includes an integrated debugger and it sounds like you can just assemble your code into a raw binary with some bootstrapping code and boot it as a kernel inside the emulator from the commandline.
Previous Answer
This isn't an emulator, but if you're interested in having a complete, ARM-based computer that you can develop whatever you want on that doesn't cost much, you should keep an eye on the Raspberry Pi project. They're very close to selling a complete, tiny, low-power ARM-based computer for $25 a piece. It has USB ports, ethernet, video out, and an SD card reader, and can boot Linux, although in your case you'd probably want to boot your own code and access the hardware directly.
EDIT: Looks like Erik already mentioned it.
My .Net Winforms application creates three OpenGL rendering contexts in my main window, and then allows the user to popup other windows where each window has two more rendering contexts (using a splitter). At around the 26th rendering context, things start to go REALLY slow. Instead of taking a few milliseconds to render a frame, the new rendering context takes between 5 and 10 seconds. It still works, just REALLY SLOW! And OpenGL does NOT return any errors (glGetError).
The other windows work fine. Just the new rendering contexts after a certain number slow down. If I close those windows, everything is fine -- until I reopen enough windows to pass the limit. Each rendering context has its own thread, and each one uses a simple shader. The slow down appears to happen when I upload a texture. But the size of the texture has no effect on how many contexts I can create, nor does the size of the OpenGL window.
I'm running on NVIDIA cards and see this on different GPUs with different amounts of memory and different driver versions. What's the deal? Is there some limit to how many rendering contexts an application can create?
Does anyone else have an application with LOTS of rendering contexts going at the same time?
As Nathan Kidd correctly said, the limit is implementation-specific, and all you can do is to run some tests on common hardware.
I was bored at today's department meeting, so I tried to piece together a bit of code which creates OpenGL contexts and tries some rendering. I tried rendering with and without textures, with and without forward-compatible OpenGL contexts.
It turned out that the limit is pretty high for GeForce cards (maybe even no limit). For a desktop Quadro, there was a limit of 128 contexts that were able to repaint correctly; the program was able to create 128 more contexts with no errors, but the windows contained rubbish.
It was even more interesting on an ATI Radeon 6950: there the redrawing stopped at window #105, and creating rendering context #200 failed.
If you want to try for yourself, the program can be found here: Max OpenGL Contexts test (there is full source code + win32 binaries).
That's the result. One piece of advice: avoid using multiple contexts where possible. Multiple contexts are understandable in an application running on multiple monitors, but applications on a single monitor should stick to a single context. Context switching is slow. And that's not all. Applications where OpenGL windows are overlapped by other windows require hardware clipping regions. There is one hardware clipping region on GeForce, and eight or more on Quadro (CAD applications often use windows and menus that overlap the OpenGL window, in contrast with games). In case more regions are needed, rendering falls back to software, so again, having lots of OpenGL windows (contexts) is not a very good idea.
The best bet is that there is no real answer to this question. It probably depends on some internal limitation of the driver, the hardware or even the OS. Something you might want to check is the number of available texture units, using glGetIntegerv(GL_MAX_TEXTURE_UNITS, ...), but that may or may not be indicative.
A common solution to avoid this is to create multiple viewports within a single context rather than multiple contexts in a single window. It shouldn't be too hard to unite the two contexts that share a window into a single context with two viewports and some kind of UI widget to serve as the splitter. Multiple windows are a different story, and you may want to consider completely re-thinking your UI design if there is an actual need for 26 separate OpenGL windows.
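For what it's worth, a minimal sketch of the "two viewports, one context" idea (the drawLeftPane/drawRightPane functions and the split position are placeholders; glScissor keeps each clear inside its own pane):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Placeholders for whatever each pane actually renders.
void drawLeftPane();
void drawRightPane();

// Render both panes of a "splitter" into one window / one context.
void renderSplit(int width, int height, int splitX) {
    glEnable(GL_SCISSOR_TEST);

    glViewport(0, 0, splitX, height);               // left pane
    glScissor(0, 0, splitX, height);
    glClearColor(0.10f, 0.10f, 0.10f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawLeftPane();

    glViewport(splitX, 0, width - splitX, height);  // right pane
    glScissor(splitX, 0, width - splitX, height);
    glClearColor(0.20f, 0.20f, 0.20f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawRightPane();

    glDisable(GL_SCISSOR_TEST);
}
```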
It's hard for me right now to think of a real UI use case that would actually require 26 different OpenGL windows operating simultaneously. Maybe another option is to create a pool of, say, 5-10 contexts and reuse them only in the windows (tabs?) that are currently visible to the user. I didn't try it, but it should be possible to create a context inside a plain window that contains nothing else and then move that window from parent to parent, to whichever top-level window it is needed in.
EDIT -
Well, actually it's not that hard to think of one. The latest Chrome (9.x.x), which supports WebGL, may want to open many tabs, each with a WebGL context... I wonder if they handle this in any way. Just tried it and ran out of memory after 13 tabs... That would actually be a good check for you as well, to see if it's something you're doing wrong or if Chrome and Firefox (4.0.x-beta) have the same problem.
Given the diverse nature of OpenGL drivers, your best bet is probably to check the behavior of the major drivers (AMD / Intel / NVIDIA / MS Software Render) and on first startup run a test. E.g. if you can see that NVIDIA always slows down like you saw, then just run a quick loop till you see where the limit is on that machine (or rather, card). It's not much fun, but I think it's pretty hard to reliably push the limits otherwise.
In other words "best bet" is just like previously answered, you can't know beforehand.
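A rough sketch of what such a startup probe could look like with WGL (the dummy window, the cap of 1024 and the pass/fail criterion are arbitrary choices for illustration; link against opengl32, gdi32 and user32):

```cpp
#include <windows.h>
#include <vector>

int main() {
    // A throwaway window whose DC we can create contexts on.
    HWND hwnd = CreateWindowW(L"STATIC", L"ctx-probe", WS_OVERLAPPEDWINDOW,
                              0, 0, 64, 64, nullptr, nullptr, nullptr, nullptr);
    HDC hdc = GetDC(hwnd);

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

    std::vector<HGLRC> contexts;
    for (int i = 0; i < 1024; ++i) {
        HGLRC rc = wglCreateContext(hdc);
        if (!rc) break;                     // creation failed: found the limit
        if (!wglMakeCurrent(hdc, rc)) {
            wglDeleteContext(rc);
            break;                          // context exists but is unusable
        }
        contexts.push_back(rc);
    }

    wglMakeCurrent(nullptr, nullptr);
    for (HGLRC rc : contexts) wglDeleteContext(rc);
    ReleaseDC(hwnd, hdc);
    DestroyWindow(hwnd);
    return static_cast<int>(contexts.size());
}
```

Note that this only measures how many contexts can be created and made current, not when rendering starts to slow down, so in a real application you might want to time an actual draw per context instead.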
If you go through that much trouble to set OpenGL up in an over-the-top multi-threaded fashion, you might as well benefit from it and consider switching to Vulkan. See, by design, the OpenGL architecture funnels all the hard-earned context/thread-separated drawing operations into one single driver thread that then redistributes all these calls across virtual hardware threads that map onto each context. The driver is in essence a huge bottleneck, because it is not itself threaded, despite any GLEW MX sitting around. It is simply not designed to handle this well.
That said, I am curious whether you used an older version of GLEW, or whether you do all the extension handling in some other way, since the latest GLEW libraries no longer support MX. One more reason to switch.