Visualizing Molecular Dynamics Simulations in 3D

I've been working with some legacy C molecular dynamics code, and it came with its own home-grown visualization routines. I am wondering if there isn't something better and more flexible that I can use, since I am reaching the limitations of the current approach.
The current routines all use OpenGL, drawn via X11 (I'm on a Mac most of the time). One related problem: I can display the simulations while they are running on Linux, but capturing them returns a black screen.
My basic problems with this are that:
I don't have any experience with OpenGL or X11, or even graphics for that matter.
Adding in new objects to draw is hard.
So my options are to learn OpenGL and X11 and figure out what is going on, or try something else. As the sims run, I write the same information the movies contain out to a binary file, so I can always read that in and create the movies later.
What I need is the ability to:
Create various basic geometric objects in 3D
Have them change color based on orientation, etc. (this would be nice)
Be able to generate the movies in .mov format (I use ffmpeg on .bmp files right now)
Be able to manipulate the perspective in 3D (rotate the image), or show multiple views simultaneously (polar projection, side view, etc.)
I see that some of this was covered here, which I am going to take a look at, but I really want something that I could use real-time as the sims run to inspect what is going on for bugs, etc.


Changing display modes from the command line

Way back in the day I tried to learn C from a game programming book. If I recall correctly, one of the first things your game "engine" would do was switch display modes to render. This involved a bit of asm to switch into a 320x200 display mode (mode 13h, maybe?) so you could draw directly to the screen. Something like that.
My question is, what is the modern equivalent of this? I'm interested in writing a command-line program that does something similar: drops into some kind of raster mode for me to draw to, but I do not assume that my program would be running under some kind of window manager like KDE, Unity, Aqua, etc.
Would this be something that OpenGL could provide, or does OpenGL assume a window manager too? My proposed program isn't a game, but would ideally start with a basic clear screen that I can draw primitives on (2D lines, circles, rects, etc.).
Cheers!
Modern operating systems don't give programmers as convenient access to low-level graphics routines as they used to. Partially, this is due to the advent of the GPU, which makes utilizing the graphics hardware a much more significant challenge than if you only had a CPU. The other reason is as window managers have gotten more and more complex, the graphical sandbox each operating system gives a programmer is more constrained.
That being said, OpenGL is definitely worth looking at. It's cross-platform, versatile, and automatically utilizes whatever hardware is available (including the graphics card). OpenGL itself doesn't directly provide access to a windowing context, but you can easily create one with the OpenGL Utility Toolkit (GLUT). OpenGL is, however, very low-level: you'll have to deal with frame buffers and flushing and bit masks and all sorts of low-level nonsense that can make OpenGL development a nightmare if you haven't done it before.
If I were starting a project, I would probably want a more robust graphics environment that provides drawing functions and windowing out of the box. Both SDL and SFML provide low-level graphics APIs that will be a little more friendly to start with. They are both implemented on top of OpenGL so you can use any of the OpenGL features when you want to, but you don't have to worry about some of the more tedious details.
As a side note, C might not be the best language to get started with graphics programming these days. If you want a really simple graphics environment that is becoming more relevant every day, you might want to check out what the web has to offer. JavaScript and the HTML5 canvas provide a very simple interface for drawing primitives, images, etc.

SetPixel equivalent on mac?

I'm currently writing a software renderer, and after I got it to kind of work on Windows I began thinking about porting it to Mac.
My Question therefore is: What's the equivalent to the Win32 GDI SetPixel function?
All I need to be able to do is plot a pixel at (x,y).
I'm new to Mac development and the closest thing I found resembling an answer was to use an OpenGL texture to draw to. But that kind of defeats the point of having software rendering if I have to use OpenGL...
Is it even possible to plot single pixels in OS X?
Short answer
Just use LibSDL (preferably version 2.0).
Long answer
You have to get your pixel data from system memory to graphics memory anyway. One way to do that is with OpenGL. You can think of OpenGL as a fancy API which lets you push data from system memory to graphics memory. Since that's exactly what you want to do, it makes sense to use OpenGL.
But that kind of defeats the point of having software rendering if i have to use OpenGL...
The graphics card is going to do the work of compositing your pixels on the screen whether you like it or not, so you don't get any particular portability advantage by avoiding OpenGL. Back in the 90s you could just get a pointer to the framebuffer and push pixels there, but those days are gone.
LibSDL is nice because it gives you an API which lets you push pixels to a buffer, and then LibSDL takes care of putting the buffer on screen.
SetPixel() is horribly slow anyway, so you should be using LibSDL on Windows too.

Displaying CUDA-processed images in WPF

I have a WPF application that acquires images from a camera, processes these images, and displays them. The processing part has become burdensome for the CPU, so I've looked at moving this processing to the GPU and running custom CUDA kernels against them. The basic process is as follows:
1) acquire image from camera
2) load image onto GPU
3) call CUDA kernel to process image
4) display processed image
A WPF-to-CUDA-to-Display Control strategy is what I'm trying to figure out.
It seems natural that once the image is loaded onto the GPU that it would not have to be unloaded in order to be displayed. I've read that this can be done with OpenGL, but do I really need to learn OpenGL and include it in my project in order to do a fast display of a CUDA-processed image?
I understand (I think) the issues of calling CUDA kernels from C#. My plan is to either build an unmanaged library around my CUDA calls, which I later wrap for C# -- OR -- try to decide on which one of the managed wrappers (managedCUDA, Cudafy, etc.) to try. I worry about using one of the prebuilt wrappers because they all appear to be lightly supported...but maybe I have the wrong impression.
Anyway, I'm feeling a bit overwhelmed after days of researching the possible options. Any advice would be greatly appreciated.
The process of taking a result of CUDA computation and using it directly on the device for a graphics activity is called "interop". There is OpenGL "interop" and there is DirectX "interop". There are plenty of CUDA sample codes demonstrating how to interact with computed images.
To go directly from computed data on the device, to display, without a trip to the host, you will need to use one of these 2 APIs (OpenGL or DirectX).
You mentioned two of the managed interfaces I've heard of, so it seems like you're aware of the options there.
If the processing time is significant compared to (much larger than) the time taken to transfer the image from host to device, you might consider starting out by just transferring the image from host to device, processing it, and then transferring it back, where you can then use the same plumbing you have been using to display it. You can then decide if the additional effort for interop is worth it.
If you can profile your code to figure out how long the image processing takes on the host, and then prototype something on the device to find out how much faster it is, that will be instructive.
You may find that the processing time is so long you can even benefit from the double-copy arrangement. Or you may find the processing time is so short on the host (compared to just the cost to transfer to the device) that the CUDA acceleration would not be useful.
WPF has a control named D3DImage to show DirectX content on screen directly, and in the managedCuda samples package you can find a version of the original fluids sample from the CUDA Toolkit using it (together with SlimDX). You don't have to use managedCuda to use CUDA from C#, but you can look at it to see how things can be done: managedCuda samples

Star/Point based Image Registration

I am developing an application that stacks multiple frames captured from a CCD camera. The frames are meant to be "aligned" or registered before stacking them. The initial aim is to ask the user for the relevant control points and then figure out if the frames need rotation and/or translation. Eventually, perhaps in the next version, I'd like to be able to detect the stars and cross-reference them in all the frames automatically.
My question is, is there a library that I can employ to register these images i.e. translate and/or rotate? I am using Xcode on Lion and would really prefer a library meant for Cocoa but anything written in C would be fine as well.
A tracking library such as libmv might work for this, it sounds like a tracking application.

Is there a non-deprecated raster graphics framework for Mac OS X?

I am looking for a raster graphics framework for Mac OS X. Specifically, I want some kind of view that I can manipulate (at least conceptually) like a matrix of pixels. My program will generate the pixel data programmatically.
QuickDraw fits that description nicely, but is deprecated. As far as I can tell, there is nothing equivalent in Core Graphics. Am I missing something?
A plain C framework would be preferable to an Objective-C one, but I'm not too fussy.
QD was deprecated because there is no way to implement it efficiently with the current generation of fully composited UIs and GPU hardware. For that reason there is nothing quite like QD on the system, and there won't be. Allowing direct access to the backing store at best forces a lot more bus transactions to and from the GPU, at worst may prevent a texture from being loaded onto the card at all, and in some cases may cause software fallbacks.
It is clear there are sometimes reasons people need pixel level access to a backing store, so there are some mechanisms to do it, but there are no real convenience methods and if you can find some way to avoid it you should. If you can't avoid it you can use CoreGraphics to create a bitmap context using CGBitmapContextCreate where you have access to the backing store and can manipulate the backing store directly. It is not simple to work with, and it is slow.
What about dividing the width and height of the view by themselves, then drawing width x height squares? You could just use an NSPoint and increment it by one until it hits width x height.
The Simple DirectMedia Layer has pixel access. It may be overkill, as it is a porting library, but the entire API is in plain C. I do not know which underlying macOS API it uses; best to check the website to see if it is suitable for your purposes.
Alternatively, you could use OpenGL textures.
The best way to do this is Core Image. It's designed for working with pixels, and it's very fast because it lets you do the work on the graphics card.
