How to display streaming images in OpenCV?

I'd like to implement the live view function using the EDSDK. I have used EdsGetPointer to get a pointer to the memory stream. Now I want to display the streaming image on the PC.
I have read that some people use APIs in Visual C++ such as ATL or CImage, which can display the streaming image just by passing the pointer to the memory stream as a parameter; the function retrieves the streaming images by itself. I am thinking of using OpenCV to display the streaming images, as I don't have Visual C++ installed on my computer. Is there any function in OpenCV that I can use to display streaming images? Or is there any other alternative for dealing with streaming images from the EDSDK?

You can pack the data into an IplImage and show it with cvShowImage in a loop: http://opencv.willowgarage.com/documentation/user_interface.html The downside is that you're tied into the OpenCV event loop.
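For instance, something along these lines; a minimal sketch that assumes the live-view frame has already been decoded to an 8-bit BGR buffer (the EDSDK live-view stream is JPEG, so in practice you would decode it first, e.g. with cvDecodeImage). Here buffer, width and height are placeholders for your own data:

#include <opencv/cv.h>
#include <opencv/highgui.h>

/* 'buffer', 'width' and 'height' are placeholders: the decoded BGR
   pixels from the live-view stream and the frame dimensions. */
void show_frame(unsigned char *buffer, int width, int height)
{
    /* Create only a header; no pixel data is copied. */
    IplImage *img = cvCreateImageHeader(cvSize(width, height),
                                        IPL_DEPTH_8U, 3);
    cvSetData(img, buffer, width * 3);   /* step = bytes per row */

    cvShowImage("Live view", img);       /* creates the window on first call */
    cvWaitKey(1);                        /* pumps the HighGUI event loop */

    cvReleaseImageHeader(&img);          /* frees the header, not 'buffer' */
}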
There are alternatives. In the past I've used OpenGL to paint the image as a texture, so that I could manage the viewport, draw on top of it, etc. You can get a simple, flexible working GUI pretty quickly using GLUT. A benefit is that whatever OpenGL code you write will be portable to any other UI library, as long as that library has an OpenGL canvas widget. What I always do is Camera -> IplImage -> OpenGL texture -> wxWidgets glCanvas. I still use OpenCV for the actual image processing, etc. It's totally cross-platform and doesn't require the paid version of VC++.
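A rough sketch of the IplImage-to-texture step under GLUT; it assumes an 8-bit BGR image and a texture id created earlier with glGenTextures:

#include <GL/glut.h>
#include <opencv/cv.h>

#ifndef GL_BGR_EXT
#define GL_BGR_EXT 0x80E0
#endif

static GLuint tex;   /* created once with glGenTextures(1, &tex) */

/* Copy an 8-bit BGR IplImage into the texture. */
void upload_frame(const IplImage *img)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* Note: very old GL versions require power-of-two texture sizes. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img->width, img->height, 0,
                 GL_BGR_EXT, GL_UNSIGNED_BYTE, img->imageData);
}

/* GLUT display callback: draw the texture on a full-window quad. */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glutSwapBuffers();
}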

Do you want it for live view? If not, you can save the streaming image on the host using:
Error = EdsCreateFileStream(dirItemInfo.szFileName, EDSDK.EdsFileCreateDisposition.CreateAlways, EDSDK.EdsAccess.ReadWrite, out stream);
Then you can load it:
IplImage *inImg = cvLoadImage("photo2.jpg");
and then process the image in OpenCV.

Related

CUDA-C importing bitmap image

I have a simple project that converts a color image to black & white using CUDA-C.
But I have a problem with importing/loading a bitmap image into the program. I don't know how to do it.
So:
Does CUDA-C have a specific function for importing/loading a bitmap image?
If yes, what is it and how do I use it?
If not, how do you import/load a bitmap image?
Thank you.
There's really nothing that is CUDA-specific about loading a bitmap image into an application.
If you have a preferred method for loading a bitmap image into an application, you should be able to use it with a CUDA app. You will obviously be loading the image into the host application space first. After that, if you want to transfer it to the device, you can use any of the standard methods for transferring data to the device to accomplish this.
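For example, something like this; load_bmp here is a hypothetical stand-in for whatever host-side loader you already use (a hand-rolled BMP parser, the BmpUtil.cpp helpers mentioned below, etc.):

#include <cuda_runtime.h>

/* Copy an image that has already been loaded on the host (by whatever
   bitmap loader you prefer, e.g. a hypothetical load_bmp) to the device,
   ready for kernel launches. Returns the device pointer, or NULL. */
unsigned char *image_to_device(const unsigned char *host_pixels,
                               int width, int height, int channels)
{
    size_t bytes = (size_t)width * height * channels;
    unsigned char *dev_pixels = NULL;

    if (cudaMalloc((void **)&dev_pixels, bytes) != cudaSuccess)
        return NULL;
    if (cudaMemcpy(dev_pixels, host_pixels, bytes,
                   cudaMemcpyHostToDevice) != cudaSuccess) {
        cudaFree(dev_pixels);
        return NULL;
    }
    /* ... launch the grayscale kernel on dev_pixels, then copy back
       with cudaMemcpy(..., cudaMemcpyDeviceToHost) and cudaFree ... */
    return dev_pixels;
}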
CUDA (i.e. the runtime API) doesn't have any specific functions for importing/loading a bitmap image.
There are many ways to load an image. If you are already using OpenGL or DirectX, then you will want to use a method associated with one of those APIs, and then use the appropriate interop API within CUDA to manipulate the object.
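As a rough sketch of the OpenGL case, assuming a pixel buffer object 'pbo' that was already created with glGenBuffers and filled with the image:

#include <GL/glut.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

/* Let a CUDA kernel work on an existing OpenGL pixel buffer object. */
void run_kernel_on_pbo(GLuint pbo)
{
    struct cudaGraphicsResource *res = NULL;
    unsigned char *dev_ptr = NULL;
    size_t size = 0;

    cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsMapFlagsNone);
    cudaGraphicsMapResources(1, &res, 0);
    cudaGraphicsResourceGetMappedPointer((void **)&dev_ptr, &size, res);

    /* ... launch kernels on dev_ptr here ... */

    cudaGraphicsUnmapResources(1, &res, 0);
    cudaGraphicsUnregisterResource(res);
}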
If you want to import a bitmap image directly into a CUDA program without using a graphics API, take a look at the CUDA samples, as a number of them do this and provide helper functions that you may want to re-use.
For example, the dct8x8 sample provides a file called BmpUtil.cpp which contains a number of useful bitmap import/handling routines, and the dct8x8 app (dct8x8.cu) shows how these may be used directly in a CUDA app.

How to stream contents of a WPF form (as frames) to an AVI File?

I currently send a live video mix to an output screen (a form on a particular screen). Consider it like a really advanced version of PowerPoint; I call it a video control room for the PC. I want to take 30 frames a second from a screen of my choice (I allow multiple screens) plus the stereo audio from the computer, and save it to a hard disk. How do I do that?
I know I can draw the image of the interface using the RenderTargetBitmap class, but how do I put those images (as frames) into an AVI file or push them to a video server? An SDK or a code example to point me in the right direction would be nice. I also want to capture the sound of the current stereo mix, or the microphone (as determined by the user).
I don't want to use a third-party tool; I'd prefer doing it in the program to keep maximum control over it. I'm OK with using a second program to do compression and just saving a raw AVI file (with an audio stream). Disk is cheap, as any programmer would say. If I have to, I'll save the video and audio streams separately, but I'd prefer not to.
Let me know.
Check out the RenderTargetBitmap class. It allows you to turn any visual into pixels that you can then pass along to an encoder/network stack, like the ones described here.
Also check out Windows Media Foundation for turning your pixels into an AVI or a stream.

Produce video from OpenGL C program

I have a C program that runs a scientific simulation and displays a visualisation in an OpenGL window. I want to make this visualisation into a video, which will eventually go on YouTube.
Question: What's the best way to make a video from a C / OpenGL program?
The way I've done it in the past is to use a screen capture program, but this is very labour-intensive (have to start/stop the screen capture program, save the video file, etc...). It seems like there should be a way to automate the process of making a video from within the C program. Then I can leave it running overnight and have 20 videos to look through in the morning, and choose the best one to put on YouTube.
YouTube recommend "MPEG4 (Divx, Xvid) format at 640x480 resolution".
I'm using GLUT 3.7.6_3, if that makes a difference. I can change windowing system if there's a good reason.
I'm running Windows (XP), so would prefer answers that work on Windows, but Linux answers are ok too. I can use Linux if it's not possible to do the video stuff easily on Windows. I have a friend who makes a .png image for each frame of the video and then stitches them together using "mencoder" on Linux.
You can use the glReadPixels function to grab each frame right after you render it (see the sketch below).
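A minimal sketch of the per-frame dump, assuming a double-buffered GLUT window; call it at the end of your display function, before swapping buffers:

#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>

/* Dump the current back buffer to a binary PPM file, one file per frame. */
void save_frame(int width, int height, int frame)
{
    char name[64];
    unsigned char *pixels;
    FILE *f;
    int y;

    pixels = (unsigned char *)malloc((size_t)width * height * 3);
    if (!pixels)
        return;

    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* rows are tightly packed */
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    sprintf(name, "frame%05d.ppm", frame);
    f = fopen(name, "wb");
    if (f) {
        fprintf(f, "P6\n%d %d\n255\n", width, height);
        /* glReadPixels returns rows bottom-up; PPM wants top-down. */
        for (y = height - 1; y >= 0; y--)
            fwrite(pixels + (size_t)y * width * 3, 3, (size_t)width, f);
        fclose(f);
    }
    free(pixels);
}

The numbered PPM files can then be stitched into an MPEG4 video with mencoder or ffmpeg, much like the PNG workflow your friend uses.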
But if the stuff you are trying to display is made of simple objects (i.e. spheres, rods, etc.), I would "export" each frame to a POV-Ray file, render those, and then make a video out of the resulting pictures. You will get much higher quality that way.
Use a 3rd party application like FRAPS to do the job for you.
Fraps can capture audio and video up to 2560x1600 with custom frame rates from 1 to 120 frames per second! All movies are recorded in outstanding quality.
They have video samples on the site. They seem good.
EDIT:
You could execute a tool to record the screen from your C application by calling it like system("C:\\screen_recorder_app.exe -params"). Check out CamStudio; it has a command-line version.

3DS file loader for opengl

This is my first question on the site.
I need a 3DS model loader for OpenGL applications. The loader should also be able to load .jpg textures. I tried to use OpenSceneGraph for this purpose, but then I also have to use the whole OpenSceneGraph data structure to render the scene. Is it possible to use OpenSceneGraph only for model loading and do the rest with standard OpenGL code, especially glTranslate, glRotate, etc.?
Googling turned up this: lib3ds
Not sure if it can read JPEGs but that should be easy enough with libjpeg or equivalent.
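For what it's worth, a minimal sketch against the lib3ds 1.x API, ignoring materials and textures entirely:

#include <lib3ds/file.h>
#include <lib3ds/mesh.h>
#include <GL/gl.h>

/* Load a .3ds file and draw its meshes as flat triangles. */
void draw_3ds(const char *path)
{
    Lib3dsFile *file = lib3ds_file_load(path);
    Lib3dsMesh *mesh;
    unsigned i;

    if (!file)
        return;

    /* Meshes are kept in a linked list; each face indexes three points. */
    for (mesh = file->meshes; mesh; mesh = mesh->next) {
        glBegin(GL_TRIANGLES);
        for (i = 0; i < mesh->faces; i++) {
            Lib3dsFace *face = &mesh->faceL[i];
            int j;
            for (j = 0; j < 3; j++)
                glVertex3fv(mesh->pointL[face->points[j]].pos);
        }
        glEnd();
    }
    lib3ds_file_free(file);
}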
OpenSceneGraph uses "plugins" to load file formats, both models and textures. There are working plugins for 3ds and for jpeg, though at least the jpeg one (I believe) isn't built in the default configuration: when creating the OpenSceneGraph makefiles (or projects on Windows), you need to specify the location of the libjpeg files for the plugin to be built, as it is based on that library. Once you have these two plugins, you'll have no problem reading 3ds files and jpeg textures. Another option is to use some other converter which supports both osg (or ive), OpenSceneGraph's native formats, and 3ds. Blender comes to mind, and it's free...
As for mixing OpenGL calls with OpenSceneGraph: that can be tricky, but possible. One option is to derive your own class from Drawable and override its drawImplementation method, then place it anywhere you want in the graph, though manually drawing the 3ds files defeats the whole purpose of using a scene graph...

Google wave: PDF-generation (pdfjet)

PDFjet says it supports App Engine, which by extension means it should run on Wave. The question is: how can I get it to work on Google Wave?
The goal is to get a PDF button in the wave that can export the whole wave to PDF.
Any assistance would be greatly appreciated.
You'll likely need to do the following:
1. Write a wave robot that simply joins a wave, so it's capable of copying its contents.
2. Write a wave gadget that adds an 'As PDF' button to a wave. When clicked, it should make a call to your robot directly, returning the generated PDF.
3. Write an extension installer that installs the robot and gadget in a wave.
4. Implement code to render a wave to a PDF using PDFJet. Since PDFJet doesn't render HTML to PDFs directly, you'll either need to implement your own renderer or use another library.
