Kind of like this unanswered question: Calling a method on screen refresh?
Instead of dealing specifically with C#, though, I want to know how one could possibly do this through any API exposed through C. I'm using GTK+ via D, but I'm okay with adding hooks to any other library exposing a C API.
Before anyone starts yelling at me about the applicability of this: I'm trying to perform visual stimulation using an LCD screen rather than a wall of LEDs attached to some crystal oscillator (it's easier to use readily available 59.94 Hz screens than constructing LED walls). To even begin to approach the flexibility provided by analog circuitry, though... no frame should be skipped, EVER (or at least only very very rarely).
New versions of GTK (3.8 onwards) introduced a GdkFrameClock API, for syncing drawing (of animations, etc) to vertical blanks. From a cursory look, it seems like it might be what you're after.
Docs: https://developer.gnome.org/gdk3/stable/gdk3-GdkFrameClock.html
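If it helps, here is a minimal C sketch of hooking that frame clock via gtk_widget_add_tick_callback() (available since GTK+ 3.8). It assumes you already have a drawing_area widget; watching the frame counter for gaps is just one possible way to notice skipped frames, not the only one.

    /* Minimal sketch (GTK+ >= 3.8): drive per-frame work from the widget's
     * GdkFrameClock via gtk_widget_add_tick_callback(). */
    #include <gtk/gtk.h>

    static gint64 last_counter = -1;

    static gboolean
    on_tick (GtkWidget *widget, GdkFrameClock *clock, gpointer user_data)
    {
        gint64 counter = gdk_frame_clock_get_frame_counter (clock);

        if (last_counter >= 0 && counter > last_counter + 1)
            g_warning ("skipped %" G_GINT64_FORMAT " frame(s)",
                       counter - last_counter - 1);
        last_counter = counter;

        gtk_widget_queue_draw (widget);   /* schedule the next stimulus frame */
        return G_SOURCE_CONTINUE;
    }

    /* after creating the widget:
     *   gtk_widget_add_tick_callback (drawing_area, on_tick, NULL, NULL);
     */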
Way, way back in the day I tried to learn C from a game programming book. If I recall correctly, one of the first things your game "engine" would do was switch display modes to render. This involved a bit of asm to switch to a 640x480 display mode (mode 13h, maybe?) so you could draw directly to the screen. Something like that.
My question is, what is the modern equivalent of this? I'm interested in writing a command-line program that does something similar: it drops into some kind of raster mode for me to draw to, but I do not assume that my program would be running under a window manager like KDE, Unity, Aqua, etc.
Would this be something that OpenGL could provide (or does OpenGL assume a window manager too)? My proposed program isn't a game, but would ideally start with a basic cleared screen on which I can draw primitives (2D lines, circles, rects, etc.).
Cheers!
Modern operating systems don't give programmers as convenient access to low-level graphics routines as they used to. Partially, this is due to the advent of the GPU, which makes utilizing the graphics hardware a much more significant challenge than if you only had a CPU. The other reason is that, as window managers have become more and more complex, the graphical sandbox each operating system gives a programmer has become more constrained.
That being said, OpenGL is definitely worth looking at. It's cross-platform, versatile, and automatically utilizes any hardware available (including the graphics card). OpenGL itself doesn't directly provide a windowing context, but you can easily create one with the OpenGL Utility Toolkit (GLUT). OpenGL is, however, very low-level: you'll have to deal with frame buffers and flushing and bit masks and all sorts of low-level details that can make OpenGL development a nightmare if you haven't done it before.
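For what it's worth, a minimal GLUT sketch looks roughly like the following (assuming freeglut and the classic fixed-function pipeline; the window title and sizes are arbitrary):

    /* Open a window, clear it, and draw a couple of 2D primitives. */
    #include <GL/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);

        glBegin(GL_LINES);            /* a diagonal line */
        glVertex2f(-0.8f, -0.8f);
        glVertex2f( 0.8f,  0.8f);
        glEnd();

        glBegin(GL_LINE_LOOP);        /* an axis-aligned rectangle outline */
        glVertex2f(-0.5f, -0.25f);
        glVertex2f( 0.5f, -0.25f);
        glVertex2f( 0.5f,  0.25f);
        glVertex2f(-0.5f,  0.25f);
        glEnd();

        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(640, 480);
        glutCreateWindow("primitives");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }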
If I were starting a project, I would probably want a more robust graphics environment that provides drawing functions and windowing out of the box. Both SDL and SFML provide low-level graphics APIs that will be a little more friendly to start with. They are both implemented on top of OpenGL so you can use any of the OpenGL features when you want to, but you don't have to worry about some of the more tedious details.
As a side note, C might not be the best language to get started with graphics programming these days. If you want a really simple graphics environment that is becoming more relevant every day, you might want to check out what the web has to offer. JavaScript and the HTML5 canvas provide a very simple interface for drawing primitives, images, etc.
I have a WPF application that acquires images from a camera, processes these images, and displays them. The processing part has become burdensome for the CPU, so I've looked at moving this processing to the GPU and running custom CUDA kernels against them. The basic process is as follows:
1) acquire image from camera
2) load image onto GPU
3) call CUDA kernel to process image
4) display processed image
A WPF-to-CUDA-to-Display Control strategy is what I'm trying to figure out.
It seems natural that once the image is loaded onto the GPU that it would not have to be unloaded in order to be displayed. I've read that this can be done with OpenGL, but do I really need to learn OpenGL and include it in my project in order to do a fast display of a CUDA-processed image?
I understand (I think) the issues of calling CUDA kernels from C#. My plan is to either build an unmanaged library around my CUDA calls, which I later wrap for C# -- OR -- try to decide on which one of the managed wrappers (managedCUDA, Cudafy, etc.) to try. I worry about using one of the prebuilt wrappers because they all appear to be lightly supported...but maybe I have the wrong impression.
Anyway, I'm feeling a bit overwhelmed after days of researching the possible options. Any advice would be greatly appreciated.
The process of taking a result of CUDA computation and using it directly on the device for a graphics activity is called "interop". There is OpenGL "interop" and there is DirectX "interop". There are plenty of CUDA sample codes demonstrating how to interact with computed images.
To go directly from computed data on the device, to display, without a trip to the host, you will need to use one of these 2 APIs (OpenGL or DirectX).
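As a rough illustration (not a complete implementation), the host-side flow for the OpenGL interop path looks something like this. It assumes an OpenGL pixel buffer object (PBO) already exists, and process_image_on_gpu() is a hypothetical wrapper (built with nvcc) around your actual kernel launch.

    #include <GL/gl.h>
    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>

    extern void process_image_on_gpu(unsigned char *dev_pixels,
                                     int width, int height);   /* hypothetical */

    void process_and_display(GLuint pbo, int width, int height)
    {
        cudaGraphicsResource_t res = NULL;
        unsigned char *dev_pixels = NULL;
        size_t size = 0;

        /* in a real app, register once at startup rather than every frame */
        cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsRegisterFlagsNone);

        /* map the PBO and get a device pointer CUDA can write to */
        cudaGraphicsMapResources(1, &res, 0);
        cudaGraphicsResourceGetMappedPointer((void **)&dev_pixels, &size, res);

        process_image_on_gpu(dev_pixels, width, height);

        cudaGraphicsUnmapResources(1, &res, 0);
        cudaGraphicsUnregisterResource(res);

        /* the PBO now holds the processed image; bind it and upload to a
         * texture (e.g. via glTexSubImage2D) to display it -- no host copy */
    }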
You mentioned two of the managed interfaces I've heard of, so it seems like you're aware of the options there.
If the processing time is significant compared to (much larger than) the time taken to transfer the image from host to device, you might consider starting out by just transferring the image from host to device, processing it, and then transferring it back, where you can then use the same plumbing you have been using to display it. You can then decide if the additional effort for interop is worth it.
If you can profile your code to figure out how long the image processing takes on the host, and then prototype something on the device to find out how much faster it is, that will be instructive.
You may find that the processing time is so long that you still benefit even with the double-copy arrangement. Or you may find that the processing time is so short on the host (compared to just the cost of transferring to the device) that the CUDA acceleration would not be useful.
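A sketch of that simpler double-copy baseline, where run_kernel() is a hypothetical wrapper around your actual kernel launch and error checking is omitted:

    #include <stddef.h>
    #include <cuda_runtime.h>

    extern void run_kernel(unsigned char *dev_buf, size_t nbytes);   /* hypothetical */

    void process_on_device(const unsigned char *host_in,
                           unsigned char *host_out, size_t nbytes)
    {
        unsigned char *dev_buf = NULL;

        /* copy the frame to the device, process it, copy the result back,
         * and keep displaying it with the existing plumbing */
        cudaMalloc((void **)&dev_buf, nbytes);
        cudaMemcpy(dev_buf, host_in, nbytes, cudaMemcpyHostToDevice);

        run_kernel(dev_buf, nbytes);

        cudaMemcpy(host_out, dev_buf, nbytes, cudaMemcpyDeviceToHost);
        cudaFree(dev_buf);
    }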
WPF has a control named D3DImage for showing DirectX content directly on screen, and in the managedCuda samples package you can find a version of the original fluids sample from the CUDA Toolkit using it (together with SlimDX). You don't have to use managedCuda to use CUDA from C#, but you can look at it to see how things can be done: managedCuda samples
I am developing an application that stacks multiple frames captured from a CCD camera. The frames are meant to be "aligned" or registered before stacking them. The initial aim is to ask the user for the relevant control points and then figure out if the frames need rotation and/or translation. Eventually, perhaps in the next version, I'd like to be able to detect the stars and cross-reference them in all the frames automatically.
My question is, is there a library that I can employ to register these images i.e. translate and/or rotate? I am using Xcode on Lion and would really prefer a library meant for Cocoa but anything written in C would be fine as well.
A tracking library such as libmv might work for this; it sounds like a tracking application.
My .Net Winforms application creates three OpenGL rendering contexts in my main window, and then allows the user to popup other windows where each window has two more rendering contexts (using a splitter). At around the 26th rendering context, things start to go REALLY slow. Instead of taking a few milliseconds to render a frame, the new rendering context takes between 5 and 10 seconds. It still works, just REALLY SLOW! And OpenGL does NOT return any errors (glGetError).
The other windows work fine. Just the new rendering contexts after a certain number slow down. If I close those windows, everything is fine -- until I reopen enough windows to pass the limit. Each rendering context has its own thread, and each one uses a simple shader. The slow down appears to happen when I upload a texture. But the size of the texture has no effect on how many contexts I can create, nor does the size of the OpenGL window.
I'm running on NVIDIA cards and see this on different GPUs with different amounts of memory and different driver versions. What's the deal? Is there some limit to how many rendering contexts an application can create?
Does anyone else have an application with LOTS of rendering contexts going at the same time?
As Nathan Kidd correctly said, the limit is implementation-specific, and all you can do is to run some tests on common hardware.
I was bored at today's department meeting, so I tried to piece together a bit of code that creates OpenGL contexts and tries some rendering. I tried rendering with and without textures, and with and without a forward-compatible OpenGL context.
It turned out that the limit is pretty high for GeForce cards (maybe even no limit). For a desktop Quadro, there was a limit of 128 contexts that were able to repaint correctly; the program was able to create 128 more contexts with no errors, but the windows contained rubbish.
It was even more interesting on an ATI Radeon 6950: there the redrawing stopped at window #105, and creating rendering context #200 failed.
If you want to try for yourself, the program can be found here: Max OpenGL Contexts test (there is full source code + win32 binaries).
That's the result. One piece of advice: avoid using multiple contexts where possible. Multiple contexts are understandable in an application running across multiple monitors, but applications on a single monitor should stick to a single context. Context switching is slow. And that's not all: applications where OpenGL windows are overlapped by other windows require hardware clipping regions. There is one hardware clipping region on GeForce, and eight or more on Quadro (CAD applications often use windows and menus that overlap the OpenGL window, in contrast with games). If more regions are needed, rendering falls back to software, so again, having lots of OpenGL windows (contexts) is not a very good idea.
The best bet is that there is no real answer to this question. It probably depends on some internal limitation of the driver, the hardware, or even the OS. Something you might want to check is the number of available texture units using glGetIntegerv(GL_MAX_TEXTURE_UNITS, ...), but that may or may not be indicative.
A common solution to avoid this is to create multiple viewports within a single context rather than multiple contexts in a single window. It shouldn't be too hard to unite the two contexts that share a window into a single context with two viewports and some kind of UI widget to serve as the splitter (see the sketch below). Multiple windows are a different story, and you may want to consider completely rethinking your UI design if there is an actual need for 26 separate OpenGL windows.
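A rough sketch of the single-context, two-viewport idea, using scissor rectangles so each pane's clear stays confined to its half; draw_scene() is a placeholder for whatever each pane actually renders:

    #include <GL/gl.h>

    extern void draw_scene(int pane);   /* placeholder for per-pane rendering */

    void draw_split(int win_w, int win_h)
    {
        int half = win_w / 2;

        glEnable(GL_SCISSOR_TEST);

        /* left pane */
        glViewport(0, 0, half, win_h);
        glScissor(0, 0, half, win_h);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene(0);

        /* right pane */
        glViewport(half, 0, win_w - half, win_h);
        glScissor(half, 0, win_w - half, win_h);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene(1);

        glDisable(GL_SCISSOR_TEST);
    }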
It's hard for me right now to think of a real UI use case that would actually require 26 different OpenGL windows operating simultaneously. Maybe another option is to create a pool of, say, 5-10 contexts and reuse them only in the windows (tabs?) that are currently visible to the user. I didn't try it, but it should be possible to create a context inside a plain window that contains nothing else, and then reparent that window to whichever top-level window it is needed in.
EDIT -
Well, actually, it's not that hard to think of one. The latest Chrome (9.x.x), which supports WebGL, may want to open many tabs, each with a WebGL context... I wonder if they handle this in any way. I just tried it and ran out of memory after 13 tabs... That would actually be a good check for you as well, to see whether it's something you're doing wrong or whether Chrome and Firefox (4.0.x beta) have the same problem.
Given the diverse nature of OpenGL drivers, your best bet is probably to check the behavior of the major drivers (AMD / Intel / NVIDIA / the MS software renderer) and run a test on first startup. E.g. if you can see that NVIDIA always slows down like you saw, then just run a quick loop until you see where the limit is on that machine (or rather, card). It's not much fun, but I think it's pretty hard to reliably push the limits otherwise.
In other words, "best bet" is just like the earlier answers said: you can't know beforehand.
If you go through that much trouble to set OpenGL up in an over-the-top multi-threaded fashion, you might as well benefit from it and consider switching to Vulkan. See, by design, the OpenGL architecture funnels all the hard-earned context/thread-separated drawing operations into one single driver thread, which then redistributes all these calls across virtual hardware threads that map onto each context. The driver is in essence a huge bottleneck, because it is not itself threaded, despite any GLEW MX sitting around. It is simply not designed to handle this well.
That said, I am curious whether you used an older version of GLEW, or whether you do all the extension handling in some other way, since the latest GLEW releases no longer support MX. One more reason to switch.
I am looking for a raster graphics framework for Mac OS X. Specifically, I want some kind of view that I can manipulate (at least conceptually) like a matrix of pixels. My program will generate the pixel data programmatically.
QuickDraw fits that description nicely, but is deprecated. As far as I can tell, there is nothing equivalent in Core Graphics. Am I missing something?
A plain C framework would be preferable to an Objective-C one, but I'm not too fussy.
QD was deprecated because there is no way to implement it efficiently with the current generation of fully composited UIs and GPU hardware. For that reason there is nothing quite like QD on the system, and there won't be. Allowing direct access to the backing store at best forces a lot more bus transactions to and from the GPU, at worst may prevent a texture from being loaded onto the card at all, and in some cases may cause software fallbacks.
It is clear there are sometimes reasons people need pixel-level access to a backing store, so there are some mechanisms to do it, but there are no real convenience methods, and if you can find some way to avoid it, you should. If you can't avoid it, you can use Core Graphics to create a bitmap context with CGBitmapContextCreate, where you have access to the backing store and can manipulate it directly. It is not simple to work with, and it is slow.
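For example, a rough sketch of wrapping your own pixel buffer in a bitmap context and turning it into a CGImage you can draw into a view; the pixel format and the gradient fill are arbitrary choices for illustration:

    #include <CoreGraphics/CoreGraphics.h>
    #include <stdint.h>
    #include <stdlib.h>

    CGImageRef make_image(size_t width, size_t height)
    {
        size_t bytes_per_row = width * 4;                   /* 8-bit RGBA */
        uint8_t *pixels = calloc(height, bytes_per_row);    /* the backing store */

        /* write whatever you like into the matrix of pixels */
        for (size_t y = 0; y < height; y++)
            for (size_t x = 0; x < width; x++) {
                uint8_t *p = pixels + y * bytes_per_row + x * 4;
                p[0] = (uint8_t)(255 * x / width);          /* red ramp */
                p[3] = 255;                                 /* opaque   */
            }

        CGColorSpaceRef cs  = CGColorSpaceCreateDeviceRGB();
        CGContextRef    ctx = CGBitmapContextCreate(pixels, width, height, 8,
                                                    bytes_per_row, cs,
                                                    kCGImageAlphaPremultipliedLast);
        CGImageRef image = CGBitmapContextCreateImage(ctx);

        CGContextRelease(ctx);
        CGColorSpaceRelease(cs);
        /* keep `pixels` alive as long as the image may still reference it
         * (CGBitmapContextCreateImage is copy-on-write) */
        return image;
    }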
What about treating the view as a width x height grid and drawing one pixel-sized square per cell? You could just use an NSPoint and increment it by one until it has covered all width x height positions.
The Simple DirectMedia Layer (SDL) has pixel access. It may be overkill, as it is a porting library, but the entire API is in plain C. I do not know what it uses as the underlying macOS API. Best to check the website to see if it is suitable for your purposes.
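A minimal sketch of that pixel access, assuming SDL2: fill a CPU-side buffer, upload it with SDL_UpdateTexture, and present it (error checking omitted, sizes arbitrary):

    #include <SDL.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void)
    {
        const int W = 640, H = 480;

        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("pixels", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, W, H, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);
        SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                             SDL_TEXTUREACCESS_STREAMING, W, H);
        uint32_t *pixels = malloc((size_t)W * H * sizeof *pixels);

        /* generate the image programmatically, e.g. a simple red gradient */
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                pixels[y * W + x] = 0xFF000000u | ((uint32_t)(255 * x / W) << 16);

        SDL_UpdateTexture(tex, NULL, pixels, W * (int)sizeof *pixels);
        SDL_RenderClear(ren);
        SDL_RenderCopy(ren, tex, NULL, NULL);
        SDL_RenderPresent(ren);
        SDL_Delay(3000);                 /* keep the window up briefly */

        free(pixels);
        SDL_DestroyTexture(tex);
        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }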
Alternatively, you could use OpenGL textures.
The best way to do this is Core Image. It's designed for working with pixels, and it's very fast because it lets you do the work on the graphics card.