Star/Point based Image Registration in C

I am developing an application that stacks multiple frames captured from a CCD camera. The frames are meant to be "aligned" or registered before stacking them. The initial aim is to ask the user for the relevant control points and then figure out if the frames need rotation and/or translation. Eventually, perhaps in the next version, I'd like to be able to detect the stars and cross-reference them in all the frames automatically.
My question is: is there a library that I can employ to register these images, i.e. translate and/or rotate them? I am using Xcode on Lion and would really prefer a library meant for Cocoa, but anything written in C would be fine as well.

A tracking library such as libmv might work for this; it sounds like a tracking application.
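For the manual-control-point version, the rigid part of the math is small enough to do without a library. Below is a minimal sketch in plain C (the names estimate_rigid_2d and Point2D are my own, and the sample coordinates are made up) of the standard closed-form least-squares fit of a rotation plus translation to matched point pairs:

    /* Sketch: closed-form least-squares fit of a 2D rigid transform (rotation +
     * translation) mapping points p[i] in the reference frame to q[i] in the
     * frame to be stacked. Function and struct names are illustrative only. */
    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct { double x, y; } Point2D;

    /* Finds theta, tx, ty minimizing sum |R(theta)*p[i] + t - q[i]|^2. */
    static void estimate_rigid_2d(const Point2D *p, const Point2D *q, size_t n,
                                  double *theta, double *tx, double *ty)
    {
        double pcx = 0, pcy = 0, qcx = 0, qcy = 0;
        for (size_t i = 0; i < n; ++i) {
            pcx += p[i].x; pcy += p[i].y;
            qcx += q[i].x; qcy += q[i].y;
        }
        pcx /= n; pcy /= n; qcx /= n; qcy /= n;          /* centroids */

        double sxx = 0, sxy = 0, syx = 0, syy = 0;       /* cross-covariance sums */
        for (size_t i = 0; i < n; ++i) {
            double ax = p[i].x - pcx, ay = p[i].y - pcy;
            double bx = q[i].x - qcx, by = q[i].y - qcy;
            sxx += ax * bx; sxy += ax * by;
            syx += ay * bx; syy += ay * by;
        }
        *theta = atan2(sxy - syx, sxx + syy);            /* optimal rotation angle */
        double c = cos(*theta), s = sin(*theta);
        *tx = qcx - (c * pcx - s * pcy);                 /* translation = qc - R*pc */
        *ty = qcy - (s * pcx + c * pcy);
    }

    int main(void)
    {
        /* Made-up example: the same three stars picked in two frames,
         * the second frame slightly rotated and shifted. */
        Point2D ref[]   = { {10.0, 10.0}, {40.0, 15.0}, {25.0, 35.0} };
        Point2D frame[] = { {12.5, 14.0}, {41.9, 24.2}, {23.3, 40.1} };
        double theta, tx, ty;
        estimate_rigid_2d(ref, frame, 3, &theta, &tx, &ty);
        printf("rotate %.3f rad, translate (%.2f, %.2f)\n", theta, tx, ty);
        return 0;
    }

With more than two control points the fit averages out small picking errors, which is usually what you want for noisy star positions.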

How to directly access the display for drawing

Context
I've been programming mainly as a hobby for some time now, mostly in C# and Java. I made many applications (Windows Forms or Java Forms) that required animated content. In Java I would use Graphics.drawX() and redraw as a function of time. When the animations happened frequently, the resolution would diminish or the application would slow down. I never gave it much thought until I played a video game on the same computer that had so much trouble rendering a simple Java app. How can my computer instantly render a complex moving environment but struggle to display a home-made 2048 game? I figured it must be either because I am misusing the draw functions or because those functions are not optimized for real-time rendering.
Question:
How can I directly access the display without having to go through preprogrammed functions?
I realize this may be hard in higher-level languages, so let's say in C on a Windows OS. (But I would appreciate any answer relating to any language and/or OS.)
I know it's a really vague question, but I can't seem to find the right words to Google it appropriately. Thank you very much for your help!
You can't (or maybe I should say should never) try to access the graphics driver directly on Windows. You used to have to write directly to video memory to do graphics before Windows, since DOS did not support graphics or display management, and the stability of those programs was always a bit dicey. On Windows, the OS owns the screen and you have to work through it to access it.
The very concept of a Windows-based OS is that the OS owns the display and gives applications access to a virtual display so that the OS can hide it or move it around. In most cases this does not cause a speed problem; but in certain cases, like gaming, you need more speed, so DirectX allows you to transfer some of those tasks to the graphics card to get the speed you need.
For more info on DirectX, check out Microsoft's Graphics and Gaming Resources.
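To make the "work through Windows" point concrete, here is a minimal Win32/GDI sketch of the normal path: the OS hands your program a device context in WM_PAINT and you draw through that. For the kind of speed the question is after, the GDI calls would be swapped for a DirectX swap chain; this is only an illustration, not a recommendation for fast animation.

    /* Minimal sketch (my own illustration): you never touch the framebuffer;
     * the OS gives you a device context for your window in WM_PAINT and you
     * draw through it. Error handling omitted. Link with user32 and gdi32. */
    #include <windows.h>

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_PAINT: {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hwnd, &ps);   /* OS-provided access to the window surface */
            Rectangle(hdc, 10, 10, 200, 120);  /* GDI drawing; the OS composites the result */
            EndPaint(hwnd, &ps);
            return 0;
        }
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int nShow)
    {
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
        wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wc.lpszClassName = TEXT("DrawDemo");
        RegisterClass(&wc);

        HWND hwnd = CreateWindow(TEXT("DrawDemo"), TEXT("Drawing through the OS"),
                                 WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                 640, 480, NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, nShow);

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return (int)msg.wParam;
    }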

Displaying CUDA-processed images in WPF

I have a WPF application that acquires images from a camera, processes these images, and displays them. The processing part has become burdensome for the CPU, so I've looked at moving this processing to the GPU and running custom CUDA kernels against them. The basic process is as follows:
1) acquire image from camera
2) load image onto GPU
3) call CUDA kernel to process image
4) display processed image
A WPF-to-CUDA-to-Display Control strategy is what I'm trying to figure out.
It seems natural that once the image is loaded onto the GPU, it should not have to be unloaded in order to be displayed. I've read that this can be done with OpenGL, but do I really need to learn OpenGL and include it in my project in order to do a fast display of a CUDA-processed image?
I understand (I think) the issues of calling CUDA kernels from C#. My plan is to either build an unmanaged library around my CUDA calls, which I later wrap for C# -- OR -- try to decide on which one of the managed wrappers (managedCUDA, Cudafy, etc.) to try. I worry about using one of the prebuilt wrappers because they all appear to be lightly supported...but maybe I have the wrong impression.
Anyway, I'm feeling a bit overwhelmed after days of researching the possible options. Any advice would be greatly appreciated.
The process of taking the result of a CUDA computation and using it directly on the device for graphics is called "interop". There is OpenGL interop and there is DirectX interop, and there are plenty of CUDA sample codes demonstrating how to work with computed images.
To go directly from computed data on the device to the display, without a trip to the host, you will need to use one of these two APIs (OpenGL or DirectX).
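For the OpenGL flavour, the host-side plumbing is only a handful of CUDA runtime calls. A rough C sketch follows; it assumes you already have a GL texture tex (GL_RGBA8, width x height) and a pitched device buffer d_pixels that your kernel wrote into, and it omits all error checking:

    /* Sketch: hand a CUDA-processed image to OpenGL without a round trip to
     * the host. Assumes an existing GL_TEXTURE_2D `tex` and a device buffer
     * `d_pixels` (4 bytes/pixel, `pitch` bytes per row) already filled by the
     * kernel. Error checking omitted for brevity. */
    #ifdef _WIN32
    #include <windows.h>
    #endif
    #include <GL/gl.h>
    #include <cuda_runtime.h>
    #include <cuda_gl_interop.h>

    void copy_result_to_texture(GLuint tex, const void *d_pixels, size_t pitch,
                                int width, int height)
    {
        struct cudaGraphicsResource *res = NULL;

        /* Tell CUDA about the GL texture (normally done once at startup). */
        cudaGraphicsGLRegisterImage(&res, tex, GL_TEXTURE_2D,
                                    cudaGraphicsRegisterFlagsWriteDiscard);

        /* Map it for this frame and copy the kernel output into the texture's
         * backing cudaArray, entirely on the device. */
        cudaGraphicsMapResources(1, &res, 0);
        cudaArray_t array;
        cudaGraphicsSubResourceGetMappedArray(&array, res, 0, 0);
        cudaMemcpy2DToArray(array, 0, 0, d_pixels, pitch,
                            (size_t)width * 4, (size_t)height,
                            cudaMemcpyDeviceToDevice);
        cudaGraphicsUnmapResources(1, &res, 0);

        cudaGraphicsUnregisterResource(res);
        /* The texture can now be drawn with normal GL calls; the DirectX
         * interop path (the one that pairs with D3DImage in WPF) is analogous. */
    }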
You mentioned two of the managed interfaces I've heard of, so it seems like you're aware of the options there.
If the processing time is significant compared to (much larger than) the time taken to transfer the image from host to device, you might consider starting out by just transferring the image from host to device, processing it, and then transferring it back, where you can then use the same plumbing you have been using to display it. You can then decide if the additional effort for interop is worth it.
If you can profile your code to figure out how long the image processing takes on the host, and then prototype something on the device to find out how much faster it is, that will be instructive.
You may find that the processing time is so long you can even benefit from the double-copy arrangement. Or you may find the processing time is so short on the host (compared to just the cost to transfer to the device) that the CUDA acceleration would not be useful.
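If you want to try the simpler copy-both-ways route first and measure it, the host side is plain C against the CUDA runtime. In the sketch below, launch_my_kernel is a hypothetical stand-in for whatever unmanaged wrapper you build around the real kernel launch:

    /* Sketch: host -> device -> host processing with CUDA event timing, to
     * judge whether interop is worth the effort. `launch_my_kernel` is a
     * placeholder for the real kernel wrapper. Error checks omitted. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    void launch_my_kernel(unsigned char *d_img, int width, int height); /* in a .cu file */

    void process_frame(unsigned char *host_img, int width, int height)
    {
        size_t bytes = (size_t)width * height * 4;   /* RGBA, 8 bits per channel */
        unsigned char *d_img = NULL;
        cudaMalloc((void **)&d_img, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(d_img, host_img, bytes, cudaMemcpyHostToDevice);  /* upload */
        launch_my_kernel(d_img, width, height);                      /* process */
        cudaMemcpy(host_img, d_img, bytes, cudaMemcpyDeviceToHost);  /* download */
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("upload + kernel + download: %.2f ms\n", ms);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_img);
        /* host_img now holds the processed frame and can go through the
         * existing WPF display path (e.g. a WriteableBitmap). */
    }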
WPF has a control named D3DImage for showing DirectX content directly on screen, and the managedCuda samples package contains a version of the original fluids sample from the CUDA Toolkit that uses it (together with SlimDX). You don't have to use managedCuda to use CUDA from C#, but you can look at it to see how things can be done: managedCuda samples

Calling a function on screen refresh

Kind of like this unanswered question: Calling a method on screen refresh?
Instead of dealing specifically with C#, though, I want to know how one could possibly do this through any API exposed through C. I'm using GTK+ via D, but I'm okay with adding hooks to any other library exposing a C API.
Before anyone starts yelling at me about the applicability of this: I'm trying to perform visual stimulation using an LCD screen rather than a wall of LEDs attached to some crystal oscillator (it's easier to use readily available 59.94 Hz screens than to construct LED walls). To even begin to approach the flexibility provided by analog circuitry, though... no frame should be skipped, EVER (or at least only very, very rarely).
Newer versions of GTK (3.8 onwards) introduced the GdkFrameClock API for syncing drawing (of animations, etc.) to vertical blanks. From a cursory look, it seems like it might be what you're after.
Docs: https://developer.gnome.org/gdk3/stable/gdk3-GdkFrameClock.html
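A minimal GTK 3 sketch of how the frame clock is typically driven from C (the tick callback fires once per frame; the actual stimulus drawing is left as a stub):

    /* Sketch: sync redraws to the frame clock with GTK 3.8+. The stimulus
     * update and the cairo drawing are left as stubs; error handling omitted. */
    #include <gtk/gtk.h>

    static gboolean on_tick(GtkWidget *widget, GdkFrameClock *clock, gpointer data)
    {
        gint64 frame_time = gdk_frame_clock_get_frame_time(clock); /* microseconds */
        (void)frame_time; (void)data;
        /* ...update stimulus state based on frame_time here... */
        gtk_widget_queue_draw(widget);   /* schedule a redraw for this frame */
        return G_SOURCE_CONTINUE;        /* keep receiving ticks */
    }

    static gboolean on_draw(GtkWidget *widget, cairo_t *cr, gpointer data)
    {
        (void)widget; (void)cr; (void)data;
        /* ...draw the current stimulus frame with cairo... */
        return FALSE;
    }

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);
        GtkWidget *win  = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *area = gtk_drawing_area_new();
        gtk_container_add(GTK_CONTAINER(win), area);
        g_signal_connect(area, "draw", G_CALLBACK(on_draw), NULL);
        g_signal_connect(win, "destroy", G_CALLBACK(gtk_main_quit), NULL);
        gtk_widget_add_tick_callback(area, on_tick, NULL, NULL);
        gtk_widget_show_all(win);
        gtk_main();
        return 0;
    }

Whether a frame is ever skipped still depends on the compositor and the load on the machine, so for strict never-skip requirements you would want to log gdk_frame_clock_get_frame_time deltas and verify them.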

Visualizing Molecular Dynamics Simulations in 3D

I've been working with some legacy C molecular dynamics code, and it came with its own home-grown visualization routines. I am wondering if there isn't something better and more flexible that I can use, since I am reaching the limitations of the current approach.
The current routines all use OpenGL, with the output drawn via X11 (I'm on a Mac most of the time). There is a related problem where I can display the simulations while they are running on Linux, but capturing them returns a black screen.
My basic problems with this are that:
I don't have any experience with OpenGL or X11, or even graphics for that matter.
Adding in new objects to draw is hard.
So my options are to learn OpenGL and X11, and figure out what is going on, or try something else. I do write out the information that the movies duplicate into a binary file as the sims run, and I can always read that in and create the movies later.
What I need is the ability to:
Create various basic geometric objects in 3D
Having the ability for them to change color based on orientation, etc, would be nice
Be able to generate the movies in .mov format (I use ffmpeg on .bmp files right now)
Be able to manipulate the perspective for either 3D (be able to rotate the image), or to be able to have multiple perspectives simultaneously (polar projection, side view, etc)
I see that some of this was covered here, which I am going to take a look at, but I really want something that I can use in real time as the sims run, to inspect what is going on, catch bugs, etc.
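As a side note on the .mov requirement: whatever renderer you end up with, one low-effort option is to pipe raw frames straight into ffmpeg instead of going through .bmp files. A rough POSIX/OpenGL sketch, with the frame size, rate and codec options as placeholders:

    /* Sketch: stream raw OpenGL frames into ffmpeg through a pipe to produce
     * a .mov directly, instead of writing .bmp files first. Uses POSIX popen();
     * the ffmpeg options are illustrative and the size in the command line
     * must match W x H. Error handling omitted. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <GL/gl.h>

    #define W 800
    #define H 600

    static FILE *open_encoder(void)
    {
        /* -vf vflip because glReadPixels returns rows bottom-up. */
        return popen("ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 800x600 -r 30 "
                     "-i - -vf vflip -pix_fmt yuv420p movie.mov", "w");
    }

    static void capture_frame(FILE *enc, unsigned char *buf)
    {
        glReadBuffer(GL_BACK);                 /* read the just-rendered back buffer */
        glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows */
        glReadPixels(0, 0, W, H, GL_RGB, GL_UNSIGNED_BYTE, buf);
        fwrite(buf, 3, (size_t)W * H, enc);
    }

    /* Usage, inside the existing render loop:
     *   FILE *enc = open_encoder();
     *   unsigned char *buf = malloc((size_t)3 * W * H);
     *   ...render a step...; capture_frame(enc, buf);   // once per frame, before swapping
     *   ...after the run: free(buf); pclose(enc);
     */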

How to minimize to system tray in C

How can I minimize my app to the system tray as soon as it starts in C?
I am new to C.
Thanks.
Are you talking about Windows and the taskbar status area? If so, check http://msdn.microsoft.com/en-us/library/windows/desktop/bb762159.aspx for the Shell_NotifyIcon function. There are plenty of references, and even some samples linked on how to use it.
Also Notifications and the Notification Area: http://msdn.microsoft.com/en-us/library/windows/desktop/ee330740.aspx
C, all by itself, is not capable of doing what you want. The language was designed to work on as many architectures as possible (microwave ovens ... air bag systems ... mouse movement control ...), and not all such architectures know what a "system tray" is.
You need to use specific libraries (which augment the capabilities of Standard C). There are lots and lots (and lots) of external libraries, and most libraries that do the same thing on different platforms are not compatible with each other ... so we need to know what the target of your code is: Windows? Windows Vista? DOS? microwave oven? satellite solar panel deployer? ... :-)
Create a window but don't show it.
Use Shell_NotifyIcon to create the icon in the notification area.
In order to perform step 2 you will need the window created in step 1.
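A bare-bones sketch of those two steps in plain Win32 C (the icon, tooltip text and callback message number are placeholders; link against user32 and shell32):

    /* Sketch of steps 1 and 2: a hidden window plus Shell_NotifyIcon. Icon,
     * tip text and the callback message are placeholders. Error checks omitted. */
    #include <windows.h>
    #include <shellapi.h>

    #define WM_TRAYICON (WM_APP + 1)

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        if (msg == WM_DESTROY) {
            PostQuitMessage(0);
            return 0;
        }
        /* WM_TRAYICON arrives here when the user interacts with the tray icon. */
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int nShow)
    {
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = TEXT("TrayOnlyApp");
        RegisterClass(&wc);

        /* Step 1: create the window but never call ShowWindow on it. */
        HWND hwnd = CreateWindow(TEXT("TrayOnlyApp"), TEXT(""), 0,
                                 0, 0, 0, 0, NULL, NULL, hInst, NULL);

        /* Step 2: add the notification-area icon tied to that hidden window. */
        NOTIFYICONDATA nid = {0};
        nid.cbSize           = sizeof(nid);
        nid.hWnd             = hwnd;
        nid.uID              = 1;
        nid.uFlags           = NIF_ICON | NIF_MESSAGE | NIF_TIP;
        nid.uCallbackMessage = WM_TRAYICON;
        nid.hIcon            = LoadIcon(NULL, IDI_APPLICATION);
        lstrcpyn(nid.szTip, TEXT("My tray app"), ARRAYSIZE(nid.szTip));
        Shell_NotifyIcon(NIM_ADD, &nid);

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        Shell_NotifyIcon(NIM_DELETE, &nid);   /* remove the icon on exit */
        return (int)msg.wParam;
    }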
If you have never programmed in C before and never used the Win32 API before, this is an ambitious first project. First of all you should master the basics of showing windows, programming a message loop, handling messages, etc. I recommend Programming Windows by Petzold.
