OpenCV - Error in Displaying Webcam Video - C

I'm pretty new to OpenCV and I'm trying to get my bearings by looking at, and running, sample code.
One of the sample programs that I was looking at is a program for displaying webcam video. Here are the important lines (the program doesn't execute farther than this):
// Make frame.
CvCapture* capture = cvCaptureFromCAM(0);
if (!capture) {
    printf("Webcam not initialized....");
}
// Display video in frame.
Unfortunately, the if statement always executes. For some reason, capture is not initialized.
Even stranger, when I run the program it gives me a GUI to select the webcam that I want to use.
However, even after I select the webcam, capture is not initialized!
What does this mean? How do I fix this?
Thanks for any suggestions.

It is possible that OpenCV cannot access the webcam until after you select it. In that case, try looping until the webcam is available:
CvCapture *capture = NULL;
do {
    // you could also try passing in CV_CAP_ANY or -1 instead of 0
    capture = cvCaptureFromCAM(0);
} while (!capture);
If this still doesn't work, call cvErrorStr(cvGetErrStatus()) to get a string explaining the error.
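For example, a minimal sketch of a single attempt with error reporting, assuming the legacy C API headers used above (note that cvGetErrStatus() only reflects errors raised through OpenCV's error handler, so it may still report "No Error"):
#include <opencv/cv.h>       // legacy OpenCV C API, as in the snippets above
#include <opencv/highgui.h>
#include <stdio.h>

int main(void) {
    CvCapture *capture = cvCaptureFromCAM(0);
    if (!capture) {
        /* cvGetErrStatus() only sees errors that went through OpenCV's
           error handler, so this may still print "No Error". */
        printf("Capture failed: %s\n", cvErrorStr(cvGetErrStatus()));
        return 1;
    }
    cvReleaseCapture(&capture);
    return 0;
}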

Related

From iOS Objective-C code and Android Java code to a Codename One PeerComponent

At the page https://www.wowza.com/docs/how-to-build-a-basic-app-with-gocoder-sdk-for-ios there are the following examples:
if (self.goCoder != nil) {
    // Associate the U/I view with the SDK camera preview
    self.goCoder.cameraView = self.view;
    // Start the camera preview
    [self.goCoder.cameraPreview startPreview];
}

// Start streaming
[self.goCoder startStreaming:self];

// Stop the broadcast that is currently running
[self.goCoder endStreaming:self];
The equivalent Java code for Android is reported at the page https://www.wowza.com/docs/how-to-build-a-basic-app-with-gocoder-sdk-for-android#start-the-camera-preview; it is:
// Associate the WOWZCameraView defined in the U/I layout with the corresponding class member
goCoderCameraView = (WOWZCameraView) findViewById(R.id.camera_preview);

// Start the camera preview display
if (mPermissionsGranted && goCoderCameraView != null) {
    if (goCoderCameraView.isPreviewPaused())
        goCoderCameraView.onResume();
    else
        goCoderCameraView.startPreview();
}

// Start streaming
goCoderBroadcaster.startBroadcast(goCoderBroadcastConfig, this);

// Stop the broadcast that is currently running
goCoderBroadcaster.endBroadcast(this);
The code is self-explanatory: the first blocks start the camera preview, the second start the streaming, and the third stop it. I want the preview and the streaming inside a Codename One PeerComponent, but I don't remember / understand how I have to modify these native code examples so that they return a PeerComponent to the native interface.
(I tried to read again the developer guide but I'm a bit confused on this point).
Thank you
This is the key line in the iOS instructions:
self.goCoder.cameraView = self.view;
Here you define the view that you need to return to the peer and that we can place. You need to change it from self.view to a view object you create. I think you can just allocate a UIView and assign/return that.
For the Android code, instead of using the XML layout they use there, you can create the WOWZCameraView directly and return that, as far as I can tell.

How can I access a graphics card's output directly?

Do graphics cards typically write their output to some place in memory which I can access? Do I have to use the driver? If so, can I use OpenGL?
I want to know if it's possible to "capture" the output of a VM on Linux which has direct access to a GPU, and is running Windows. Ideally, I could access the output from memory directly, without touching the GPU, since this code would be able to run on the Linux host.
The other option is to maybe write a Windows driver which reads the output of the GPU and writes it to some place in memory. Then, on the Linux side, a program can read this memory. This seems somewhat impossible, since I'm not really sure how to get a process on the host to share memory with a process on the guest.
Is it possible to do option 1 and simply read the output from memory?
I do not code under Linux, but on Windows (you are running it in a VM anyway) you can use WinAPI to directly access the canvas of any window, or even the desktop, from a third-party app. Some GPU overlays can be problematic to capture (especially DirectX-based ones), but I have had no problems with my GL/GLSL apps so far.
If you have access to the app's source, you can use glReadPixels to extract the image from GL directly (but that only works for the current GL-based rendering).
Using glReadPixels
As mentioned, this must be implemented directly in the targeted app, so you need its source code or you must inject your code at the right place/time. I use this code for screenshots:
void OpenGLscreen::screenshot(Graphics::TBitmap *bmp)
{
    if (bmp==NULL) return;
    int *dat=new int[xs*ys],x,y,a,*p;
    if (dat==NULL) return;
    bmp->HandleType=bmDIB;
    bmp->PixelFormat=pf32bit;
    if ((bmp->Width!=xs)||(bmp->Height!=ys)) bmp->SetSize(xs,ys);
    if ((bmp->Width==xs)&&(bmp->Height==ys))
    {
        glReadPixels(0,0,xs,ys,GL_BGRA,GL_UNSIGNED_BYTE,dat);
        glFinish();
        for (a=0,y=ys-1;y>=0;y--)
            for (p=(int*)bmp->ScanLine[y],x=0;x<xs;x++,a++)
                p[x]=dat[a];
    }
    delete[] dat;
}
where xs,ys is the OpenGL window resolution. You can ignore the whole bmp stuff (it is the VCL bitmap I use to store the screenshot) and also the for loops; they just copy the image from the buffer into the bitmap. So the important part is just this:
int *dat=new int[xs*ys]; // each pixel is 32bit int
glReadPixels(0,0,xs,ys,GL_BGRA,GL_UNSIGNED_BYTE,dat);
glFinish();
You need to execute this code after the rendering is done, otherwise you will get an unfinished or empty buffer. I use it after redraw/repaint events. As mentioned before, this captures only the GL-rendered content, so if your app combines GDI+OpenGL it is better to use the next approach.
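Before moving on, here is a rough sketch of that "capture after redraw" timing in a double-buffered context (render_scene, gl and hdc are placeholder names, not part of the code above):
// Illustrative only: capture right after the scene is rendered,
// before SwapBuffers, so the back buffer still holds the finished frame.
render_scene();          // your GL drawing (placeholder)
glFlush();
gl.screenshot(bmp);      // the glReadPixels routine shown above
SwapBuffers(hdc);        // present the frame afterwards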
WinAPI approach
To obtain the canvas image of any window I wrote this class:
//---------------------------------------------------------------------------
//--- screen capture ver: 1.00 ----------------------------------------------
//---------------------------------------------------------------------------
class scrcap
{
public:
    HWND hnd,hnda;
    TCanvas *scr;
    Graphics::TBitmap *bmp;
    int x0,y0,xs,ys;

    scrcap()
    {
        hnd=NULL;
        hnda=NULL;
        scr=new TCanvas();
        bmp=new Graphics::TBitmap;
        #ifdef _mmap_h
        mmap_new('scrc',scr,sizeof(TCanvas()));
        mmap_new('scrc',bmp,sizeof(Graphics::TBitmap));
        #endif
        if (bmp)
        {
            bmp->HandleType=bmDIB;
            bmp->PixelFormat=pf32bit;
        }
        x0=0; y0=0; xs=1; ys=1;
        hnd=GetDesktopWindow();
    }

    ~scrcap()
    {
        #ifdef _mmap_h
        mmap_del('scrc',scr);
        mmap_del('scrc',bmp);
        #endif
        if (scr) delete scr; scr=NULL;
        if (bmp) delete bmp; bmp=NULL;
    }

    void init(HWND _hnd=NULL)
    {
        if (scr==NULL) return;
        if (bmp==NULL) return;
        bmp->SetSize(1,1);
        if (!IsWindow(_hnd)) _hnd=hnd;
        scr->Handle=GetDC(_hnd);
        hnda=_hnd;
        resize();
    }

    void resize()
    {
        if (!IsWindow(hnda)) return;
        RECT r;
        // GetWindowRect(hnda,&r);
        GetClientRect(hnda,&r);
        x0=r.left; xs=r.right-x0;
        y0=r.top;  ys=r.bottom-y0;
        bmp->SetSize(xs,ys);
        xs=bmp->Width;
        ys=bmp->Height;
    }

    void capture()
    {
        if (scr==NULL) return;
        if (bmp==NULL) return;
        bmp->Canvas->CopyRect(Rect(0,0,xs,ys),scr,TRect(x0,y0,x0+xs,y0+ys));
    }
};
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
Again, it uses VCL, so rewrite the bitmap bmp and canvas scr to your programming environment's style. Also ignore the _mmap_h chunks of code; they are just for debugging/tracing of memory pointers, related to a nasty compiler bug I was facing at the time I wrote this.
The usage is simple:
// globals
scrcap cap;
// init
cap.init();
// on screenshot
cap.capture();
// here use cap.bmp
If you call cap.init() it will lock onto the whole Windows desktop. If you call cap.init(window_handle) it will lock onto that specific visual window/component instead. To obtain the window handle from the third-party app's side, see:
is ther a way an app can display a message without the use of messagebox API?
Sorry, it is on SE/RE instead of here on SE/SO, but my answer there covering this topic was deleted. I use this for video capture; all of the animated GIFs in my answers were created by this code. Another example can be seen at the bottom of this answer of mine:
Image to ASCII art conversion
As you can see, it also works for the DirectX overlay of Media Player Classic (even the Windows PrintScreen function can't capture that correctly). As I wrote, I have had no problems with this so far.
Beware: visual WinAPI calls MUST BE CALLED FROM THE APP MAIN THREAD (WNDPROC), otherwise serious problems can occur, leading to random, unrelated WinAPI call exceptions anywhere in the app.
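For completeness, a minimal sketch of locking the class onto a third-party window once you have its handle; the window title passed to FindWindowA is only a hypothetical example:
// Hypothetical example: grab a handle by window title and capture its canvas.
HWND h = FindWindowA(NULL, "Untitled - Notepad");
if (h != NULL)
{
    cap.init(h);    // lock onto that window's client area
    cap.capture();  // cap.bmp now holds its canvas image
}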

GTK Application doesn't refresh UI when using OpenCV

I have a simple GUI application I wrote in C for the RaspBerry PI while using GTK+2.0 to handle the actual UI rendering. The application so far is pretty simple, with just a few pushbuttons for testing simple functions I wrote. One button causes a thread to be woken up which prints text to the console, and goes back to sleep, while another button stops this operation early by locking a mutex, changing a status variable, then unlocking the mutex again. Fairly simple stuff so far. The point of using this threaded approach is so that I don't ever "lock up" the UI during a long function call, forcing the user to be blocked on the I/O operations completing before the UI is usable again.
If I call the following function in my thread's processing loop, I encounter a number of issues.
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <stdio.h>
#include <errno.h>

using namespace std;
using namespace cv;

#define PROJECT_NAME "CAMERA_MODULE" // Include before liblog
#include <log.h>

int cameraAcquireImage(char* pathToImage) {
    if (!pathToImage) {
        logError("Invalid input");
        return (-EINVAL);
    }

    int iErr = 0;
    CvCapture *capture = NULL;
    IplImage *frame, *img;

    // 0=default, -1=any camera, 1..99=your camera
    capture = cvCaptureFromCAM(CV_CAP_ANY);
    if (!capture) {
        logError("No camera interface detected");
        iErr = (-EIO);
    }

    if (!iErr) {
        if ((frame = cvQueryFrame(capture)) == NULL) {
            logError("ERROR: frame is null...");
            iErr = (-EIO);
        }
    }

    if (!iErr) {
        CvSize size = cvSize(100, 100);
        if ((img = cvCreateImage(size, IPL_DEPTH_16S, 1)) != NULL) {
            img = frame;
            cvSaveImage(pathToImage, img);
        }
    }

    if (capture) {
        cvReleaseCapture(&capture);
    }

    return 0;
}
The function uses some simple OpenCV code to take a snapshot with a webcam connected to my Raspberry PI. It issues warnings of VIDIOC_QUERYMENU: Invalid argument to the console, but still manages to acquire the images and save them to a file for me. However, my GUI becomes sluggish, and sometimes hangs. If it doesn't outright hang, then the window goes blank, and I have to randomly click all over the UI area until I click on where a pushbutton would normally be located, and the UI finally re-renders again rather than showing a white empty layout.
How do I go about resolving this? Is this some quirk of OpenCV when using it as part of a GTK+2.0 application? I had originally set my project up as a GTK3.0 application, but it wouldn't run due to some check in GTK preventing multiple versions from being included in a single application, and it seems OpenCV's GUI layer is built on GTK+2.0.
Thank you.
there is something quite broken here:
CvSize size = cvSize(100, 100);
if ((img = cvCreateImage(size, IPL_DEPTH_16S, 1)) != NULL) {
    img = frame;
    cvSaveImage(pathToImage, img);
}
First, you create a useless 16-bit image (why even?), then you reassign (alias) that pointer to your original frame, and then you never cvReleaseImage it (memory leak).
Please, stop using OpenCV's deprecated C API. Please.
Any noob will shoot themselves in the foot using it (one of the main reasons to get rid of it).
Also, you can only use ~30% of OpenCV's functionality this way (the OpenCV 1.0 feature set).
Again: please, stop using OpenCV's deprecated C API. Please.
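For reference, a rough sketch of the same snapshot routine using only the C++ API (assuming OpenCV 2.x headers; error reporting trimmed to printf for brevity):
#include <opencv2/highgui/highgui.hpp>
#include <stdio.h>

// Sketch: grab one frame from the default camera and save it to disk.
int cameraAcquireImage(const char* pathToImage) {
    cv::VideoCapture capture(0);              // 0 = default camera
    if (!capture.isOpened()) {
        printf("No camera interface detected\n");
        return -1;
    }
    cv::Mat frame;
    if (!capture.read(frame) || frame.empty()) {
        printf("ERROR: frame is empty\n");
        return -1;
    }
    // The capture device is released automatically by the destructor.
    return cv::imwrite(pathToImage, frame) ? 0 : -1;
}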
Didn't you forget to free the img pointer?
Also, I once wrote an app that stored uncompressed images on disk, and things became sluggish. What was actually taking the time was writing the images to disk, as it exceeded the bandwidth the filesystem layer could handle.
So see if you can store compressed images instead (trading some CPU to save bandwidth), or store your images in RAM in a queue and save them afterwards (in a separate thread, or in an idle handler). Of course, if the video you capture is too long, you may end up with an out-of-memory condition. I only had sequences of a few seconds to store, so that did the trick.
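A rough sketch of that queue idea, assuming the C++ OpenCV API and C++11 threads are available (all names here are illustrative, not from the question's code):
#include <opencv2/highgui/highgui.hpp>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// Illustrative sketch: the capture/UI side pushes frames, a worker thread
// writes them to disk, so slow I/O never stalls the GUI.
static std::queue<cv::Mat> pending;
static std::mutex queueLock;
static std::condition_variable frameReady;
static bool finished = false;

void writerLoop() {
    int n = 0;
    for (;;) {
        std::unique_lock<std::mutex> lock(queueLock);
        frameReady.wait(lock, [] { return !pending.empty() || finished; });
        if (pending.empty() && finished) break;
        cv::Mat frame = pending.front();
        pending.pop();
        lock.unlock();                          // write without holding the lock
        char name[64];
        std::snprintf(name, sizeof(name), "frame_%04d.png", n++);
        cv::imwrite(name, frame);               // compressed, so less disk bandwidth
    }
}

void enqueueFrame(const cv::Mat& frame) {
    std::lock_guard<std::mutex> lock(queueLock);
    pending.push(frame.clone());                // clone: the capture buffer gets reused
    frameReady.notify_one();
}
The writer thread would be started once (for example with std::thread(writerLoop)), and the capture/GTK side only ever calls enqueueFrame(), so the UI never waits on disk I/O.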

How to get rid of the "click" sound at end of translation in Microsoft Translate API in Silverlight app

I am using the MS Translator to send back a WAV file of text to enable "talking" in my Silverlight 4 app.
However, at the end of every translation there is a weird click noise (it sounds like someone is turning a microphone on or off).
Here is an online Silverlight app which demonstrates the issue. Type something in and translate it (can be the same language) and listen to the end of the talking.
Is there anything I can do to get rid of this noise? I was thinking of reading the WAV file to 90% and then stopping it before the sound but I would like to understand technically why it's coming back with the noise and where the problem is so I can find the best solution for it.
UPDATE: After Brad's useful lead below it seems that the problem is in the WaveMediaStreamSource that converts the returned WAV into a format that Silverlight can use.
This is the same one mentioned/used in the online project here.
So... any idea how to get rid of the crackling sound when WaveMediaStreamSource converts it?
I'm not sure if this is helpful, but it seems that the click noise isn't in the WAV file itself.
I used Fiddler to dig out the response coming back from the server and saved that to a WAV file. Opening that in Audacity, I can clearly hear the translation to the end... no click.
So, that click might be the normal sound of the component in Silverlight stopping. However, I have a different theory.
The WAV file itself is sampled at 8kHz. That's kind of an oddball. I bet the click noise is an artifact of the sound card or software (silverlight/audio driver/windows itself, etc.) up-sampling to a more appropriate rate. This is testable. Try making a WAV file and use Fiddler or some other HTTP proxy tool to return your WAV instead of what was requested from the Microsoft server. See if your WAV (at 44.1kHz for example) has the same issue.
I had this same problem. I'm sure everyone does. Here is how I solved it.
Basically, I added a marker to the stream that performs a callback one second before the audio finishes playing. That callback then starts a timer that fires 700 ms later, and that handler stops the audio (about 300 ms) before it completes. I tried it without this timer and it was inconsistent. It seems that if you set a marker to fire 300 ms before the end it won't always fire. Better to get it one second before and then run your own timer to cut it off when you are ready.
Here are the relevant code snippets. I hope they can help someone else. There are loads of other tricks to writing a smooth-running audio player, but this code should solve at least your clicking issue.
public void Page_Loaded(object sender, EventArgs args)
{
    mediaElement.MediaOpened += new RoutedEventHandler(mediaElement_MediaOpened);
    mediaElement.MarkerReached += new TimelineMarkerRoutedEventHandler(mediaElement_MarkerReached);
}

Timer t;

void mediaElement_MarkerReached(object sender, TimelineMarkerRoutedEventArgs e)
{
    // almost completed playing the file so lets stop before the annoying click is heard
    t = new Timer(handleStopTimerDone, "", 700, 0);
}

public void handleStopTimerDone(object state)
{
    // stop the audio playing
    Stop();
}

private void mediaElement_MediaOpened(object sender, RoutedEventArgs e)
{
    TimeSpan duration = mediaElement.NaturalDuration.TimeSpan;
    TimelineMarker newMarker = new TimelineMarker();
    newMarker.Time = new TimeSpan(duration.Ticks - 10000000);
    while (mediaElement.Markers.Count > 0)
    {
        mediaElement.Markers.RemoveAt(0);
    }
    mediaElement.Markers.Add(newMarker);
}

public void Stop()
{
    this.Dispatcher.BeginInvoke(delegate()
    {
        mediaElement.AutoPlay = false;
        mediaElement.Stop();
        mediaElement.Position = TimeSpan.FromSeconds(0);
        if (memData != null)
        {
            WaveMediaStreamSource wavMss = new WaveMediaStreamSource(memData);
            mediaElement.SetSource(wavMss);
        }
    });
}

How do I get the selected text from the focused window using native Win32 API?

My app will be running in the system tray, monitoring for a hotkey; when the user selects some text in any window and presses the hotkey, how do I obtain the selected text when I get the WM_HOTKEY message?
To capture the text onto the clipboard, I tried sending Ctrl + C using keybd_event() and SendInput() to the active window (GetActiveWindow()) and the foreground window (GetForegroundWindow()); I tried combinations of these; all in vain. Can I get the selected text of the focused window in Windows with plain Win32 system APIs?
TL;DR: Yes, there is a way to do this using plain win32 system APIs, but it's difficult to implement correctly.
WM_COPY and WM_GETTEXT may work, but not in all cases. They depend on the receiving window handling the request correctly, and in many cases it will not. Let me run through one possible way of doing this. It may not be as simple as you were hoping, but what is, in the adventure-filled world of Win32 programming? Ready? Ok. Let's go.
First we need to get the HWND id of the target window. There are many ways of doing this. One such approach is the one you mentioned above: get the foreground window and then the window with focus, etc. However, there is one huge gotcha that many people forget. After you get the foreground window you must AttachThreadInput to get the window with focus. Otherwise GetFocus() will simply return NULL.
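For reference, a minimal sketch of that AttachThreadInput route (shown only to illustrate the gotcha; the GUITHREADINFO approach below is safer):
// Attach to the foreground window's input thread so GetFocus() works,
// then always detach again.
HWND fg = GetForegroundWindow();
DWORD fgThread = GetWindowThreadProcessId(fg, NULL);
DWORD myThread = GetCurrentThreadId();
HWND focused = NULL;
if (fg != NULL && AttachThreadInput(myThread, fgThread, TRUE))
{
    focused = GetFocus();   // without the attach, this would return NULL
    AttachThreadInput(myThread, fgThread, FALSE);
}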
There is a much easier way. Simply (mis)use the GUITHREADINFO functions. It's much safer, as it avoids all the hidden dangers associated with attaching your input thread to another program.
GUITHREADINFO gui;
gui.cbSize = sizeof(GUITHREADINFO);
HWND target_window = NULL;

if (GetGUIThreadInfo(0, &gui))   // 0 = the foreground (active) GUI thread
{
    target_window = gui.hwndFocus;
}
else
{
    // You can get more information on why the function failed by calling
    // the win32 function, GetLastError().
}
Sending the keystrokes to copy the text is a bit more involved...
We're going to use SendInput instead of keybd_event because it's faster, and, most importantly, cannot be messed up by concurrent user input, or other programs simulating keystrokes.
This does mean that the program will be required to run on Windows XP or later, though, so sorry if you're running 98!
// We're sending two keys, CONTROL and 'C'. Since key-down and key-up are
// two separate events, we multiply that number by two.
int key_count = 4;
INPUT* input = new INPUT[key_count];
for (int i = 0; i < key_count; i++)
{
    input[i].type = INPUT_KEYBOARD;
    input[i].ki.dwFlags = 0;
    input[i].ki.time = 0;
    input[i].ki.dwExtraInfo = 0;
}
input[0].ki.wVk = VK_CONTROL;
input[0].ki.wScan = MapVirtualKey(VK_CONTROL, MAPVK_VK_TO_VSC);
input[1].ki.wVk = 0x43; // Virtual-key code for 'C'
input[1].ki.wScan = MapVirtualKey(0x43, MAPVK_VK_TO_VSC);
input[2].ki.dwFlags = KEYEVENTF_KEYUP;
input[2].ki.wVk = input[0].ki.wVk;
input[2].ki.wScan = input[0].ki.wScan;
input[3].ki.dwFlags = KEYEVENTF_KEYUP;
input[3].ki.wVk = input[1].ki.wVk;
input[3].ki.wScan = input[1].ki.wScan;

if (!SendInput(key_count, (LPINPUT)input, sizeof(INPUT)))
{
    // You can get more information on why this function failed by calling
    // the win32 function, GetLastError().
}
delete[] input;
There. That wasn't so bad, was it?
Now we just have to take a peek at what's in the clipboard. This isn't as simple as you would first think. The "clipboard" can actually hold multiple representations of the same thing. The application that is active when you copy to the clipboard has control over what exactly to place in the clipboard.
When you copy text from Microsoft Office, for example, it places RTF data on the clipboard alongside a plain-text representation of the same text. That way you can paste it into both WordPad and Notepad: WordPad uses the rich-text format, while Notepad uses the plain-text one.
For this simple example, though, let's assume we're only interested in plaintext.
if (OpenClipboard(NULL))
{
    // Optionally you may want to change CF_TEXT below to CF_UNICODETEXT.
    // Play around with it, and check out all the standard formats at:
    // http://msdn.microsoft.com/en-us/library/ms649013(VS.85).aspx
    HGLOBAL hglb = GetClipboardData(CF_TEXT);
    if (hglb != NULL)
    {
        LPSTR lpstr = (LPSTR)GlobalLock(hglb);
        // Copy lpstr, then do whatever you want with the copy.
        GlobalUnlock(hglb);
    }
    CloseClipboard();
}
else
{
    // You know the drill by now. Check GetLastError() to find out what
    // went wrong. :)
}
else
{
// You know the drill by now. Check GetLastError() to find out what
// went wrong. :)
}
And there you have it! Just make sure you copy lpstr into a buffer you own; don't hold on to lpstr itself, since we have to cede control of the clipboard's contents before we close it.
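For instance, a minimal way to take that copy inside the CF_TEXT branch above (std::string is just one convenient choice; requires #include <string>):
std::string text;            // our own copy, safe to use after CloseClipboard()
if (lpstr != NULL)
    text.assign(lpstr);      // copy the bytes before GlobalUnlock()/CloseClipboard()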
Win32 programming can be quite daunting at first, but after a while... it's still daunting.
Cheers!
Try adding a Sleep() after each SendInput(). Some apps just aren't that fast in catching keyboard input.
Try SendMessage(WM_COPY, etc.).
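For example, a minimal sketch, assuming target_window is the focused control's handle obtained as in the accepted answer (this only helps if that control actually implements WM_COPY, e.g. a standard edit control):
SendMessage(target_window, WM_COPY, 0, 0);  // ask the control to copy its selection
// Then read the clipboard exactly as shown above.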
