OpenCV: show both incoming video and modified video in separate windows (C)

This should be easy. I have a video stream coming in from my webcam and I'm experimenting with image transformations. I'd like to view the original video in one window and the transformed video in another. The problem is that as soon as I start capturing video instead of single images, the original video window displays the transformed video, and I don't understand why.
cvNamedWindow("in", CV_WINDOW_AUTOSIZE);
cvNamedWindow("out", CV_WINDOW_AUTOSIZE);
CvCapture *fc = cvCaptureFromCAM(0);
IplImage* frame = cvQueryFrame(fc);
if (!frame) {
return 0;
}
IplImage* greyscale = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
IplImage* output = cvCreateImage(cvGetSize(frame),IPL_DEPTH_32F , 1);
while(1){
frame= cvQueryFrame(fc);
cvShowImage("in", frame);
// manually convert to greyscale
for (int y = 0; y < frame->height; y++) {
uchar* p = (uchar*) frame->imageData + y* frame->widthStep; // pointer to row
uchar* gp = (uchar*) greyscale->imageData + y*greyscale->widthStep;
for(int x = 0; x < frame->width; x++){
gp[x] = (p[3*x] + p[3*x+1] + p[3*x+2])/3; // average RGB values
}
}
cvShowImage("out", greyscale);
char c = cvWaitKey(33);
if (c == 27) {
return 0;
}
}
In this simple example, both video streams end up appearing greyscale. The pointer values and imageData for frame and greyscale are completely different. If I stop showing greyscale in the "out" window, then frame appears in color.
Also, if I go on to apply a Sobel operation to the greyscale image and display the result in "out", both the "in" and "out" windows show the Sobel image!
Any ideas?

Hmm, this was weird, but it seems CV_WINDOW_AUTOSIZE was the problem. Perhaps it's not supported in OpenCV 2.1 (which I'm fairly sure is what I'm running). In any case, using 0 instead of CV_WINDOW_AUTOSIZE when creating the windows works fine.
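A minimal sketch of that change, with everything else kept exactly as in the code above:

// Passing 0 instead of CV_WINDOW_AUTOSIZE creates resizable windows;
// this avoided the shared-display problem for me
cvNamedWindow("in", 0);
cvNamedWindow("out", 0);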

I have tried your code with OpenCV 2.0 under Mandriva 2010 and it works fine with either CV_WINDOW_AUTOSIZE or 0.
You could try converting to greyscale with cvCvtColor(frame, greyscale, CV_BGR2GRAY) (webcam frames are BGR-ordered in OpenCV) and see if the problem persists.
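For reference, a minimal sketch of the loop body with the manual averaging replaced by cvCvtColor, reusing the names from the code above:

frame = cvQueryFrame(fc);
cvShowImage("in", frame);
// cvCvtColor applies the standard weighted luma conversion; cvQueryFrame
// delivers BGR-ordered data, hence CV_BGR2GRAY
cvCvtColor(frame, greyscale, CV_BGR2GRAY);
cvShowImage("out", greyscale);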

Related

OpenCV Canny: output image is pure gray

I am learning OpenCV from a book and following its examples. The book introduces the Canny filter, but there is a problem with my output: given a 512x512 grayscale input image, the filter produces a pure gray image (the input and output images were attached to the original post). Here is the snippet:
#include <opencv\cv.h>
#include <opencv2\highgui\highgui.hpp>
#include "Resources.h"
IplImage* doCanny(
IplImage* in,
double lowThresh,
double highThresh,
double aperture
) {
if (in->nChannels != 1)
{
return 0; // Canny only handle gray scale images.
}
IplImage* out = cvCreateImage(
CvSize(cvGetSize(in)),
IPL_DEPTH_8U,
1
);
cvCanny(in, out, lowThresh, highThresh, aperture);
return out;
}
int main(int argc, char** argv)
{
    IplImage* image = cvLoadImage(IMAGE_FRUIT);
    IplImage* output = doCanny(image, 200, 201, 1);
    cvNamedWindow("Canny", CV_WINDOW_AUTOSIZE);
    cvShowImage("Canny", output);
    cvWaitKey(0);
    cvReleaseImage(&output);
    cvDestroyWindow("Canny");
    return 0;
}
Visual Studio 2015, OpenCV version 2.4.13
If you step through your code, you will find that cvCanny never runs: the value returned from doCanny is a null pointer.
OpenCV's Canny edge detector only accepts grayscale images, which is why the original code has the "if (in->nChannels != 1)" check. cvLoadImage loads a three-channel image by default, so you need to convert your input image to grayscale first:
// Convert to grayscale first
IplImage* gray_image = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U, 1);
cvCvtColor(image, gray_image, CV_BGR2GRAY);
// Perform Canny
IplImage* output = doCanny(gray_image, 200, 201, 3);
Additionally, I think your "aperture" parameter for cvCanny is invalid: try the default value 3 (or 5, or 7) and you should see a result.
I would also recommend using the C++ interface instead of the deprecated C interface.
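For illustration, a minimal sketch of the same pipeline with the C++ interface (the file name is a placeholder; written against OpenCV 2.4):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Load directly as grayscale; Canny requires a single-channel input
    cv::Mat gray = cv::imread("fruit.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (gray.empty()) return -1;

    cv::Mat edges;
    cv::Canny(gray, edges, 200, 201, 3); // low/high thresholds, aperture 3

    cv::imshow("Canny", edges);
    cv::waitKey(0);
    return 0;
}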

Xlib: Showing the webcam using Pixmap and XDrawPoint is too slow... how can I improve?

I want to open the webcam and show the video with Xlib.
So, I open the webcam, get the image, and I do something like:
for (x = 0; x < webcamX; x++) {
    for (y = 0; y < webcamY; y++) {
        pixel = GetCameraPixel(x, y);
        XSetForeground(dpy, gc, pixel);
        XDrawPoint(dpy, pixmap, gc, x, y);
    }
}
XCopyArea(dpy, pixmap, window, ...);
but calling XDrawPoint is too slow.
I also tried XImage, but XPutPixel and XPutImage were both slow as well (for me, using a Pixmap was faster than XImage).
I think the problem is that calling XDrawPoint 640 * 480 times makes a lot of requests to the X server. The same goes for XPutImage: sending a 640 * 480 image is a lot of data, isn't it?
So, forgetting about the webcam: imagine I want to write a game, an animation, a video player, or whatever, using only Xlib. Is that possible? I'm sure that drawing pixel by pixel is not efficient, so how can I do it well?
All the information I found on the internet says to use a Pixmap or an XImage, which is better than drawing on the window itself, but it's not enough.
For example, the xscreensaver hacks show very nice full-screen animations (and I think they only use Xlib). How do they make the animation smooth?
Thanks.
Well, it was my fault, I think.
Using XImage is the best way; it is very fast and smooth when drawing pixel by pixel, as in the webcam example.
But I was creating the XImage with the XYPixmap format, and that was too slow. I changed it to ZPixmap and everything is perfect!
So, the solution I used for the webcam is:
image = XCreateImage(dpy, DefaultVisual(dpy, DefaultScreen(dpy)),
                     DefaultDepth(dpy, DefaultScreen(dpy)), ZPixmap, 0,
                     imagedata, webcamX, webcamY, 32, 0);
/* ... */
for (x = 0; x < webcamX; x++) {
    for (y = 0; y < webcamY; y++) {
        pixel = GetCameraPixel(x, y);
        XPutPixel(image, x, y, pixel);
    }
}
XPutImage(dpy, window, gc, image, 0, 0, 0, 0, webcamX, webcamY);
In short:
1. Create an XImage using the ZPixmap format.
2. Draw into that XImage (XPutPixel in my case).
3. Copy the XImage to the window (XPutImage).
I think XYPixmap was so slow because it stores the image as separate bit planes, so X has to do a per-pixel conversion, which makes the whole process very slow.
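For completeness, a sketch of how the imagedata buffer passed to XCreateImage might be allocated; the 4 bytes per pixel is an assumption that matches the bitmap_pad of 32 above and a 24/32-bit default visual:

/* XDestroyImage() will free() this buffer, so it must come from malloc */
char *imagedata = (char *) malloc(webcamX * webcamY * 4);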

Saving a BitmapSource as a JPEG-encoded TIFF using LibTiff.Net

I'm trying to write a routine that will save a WPF BitmapSource as a JPEG-encoded TIFF using LibTiff.Net. Using the examples provided with LibTiff, I came up with the following:
private void SaveJpegTiff(BitmapSource source, string filename)
{
    if (source.Format != PixelFormats.Rgb24)
        source = new FormatConvertedBitmap(source, PixelFormats.Rgb24, null, 0);

    using (Tiff tiff = Tiff.Open(filename, "w"))
    {
        tiff.SetField(TiffTag.IMAGEWIDTH, source.PixelWidth);
        tiff.SetField(TiffTag.IMAGELENGTH, source.PixelHeight);
        tiff.SetField(TiffTag.COMPRESSION, Compression.JPEG);
        tiff.SetField(TiffTag.PHOTOMETRIC, Photometric.RGB);
        tiff.SetField(TiffTag.ROWSPERSTRIP, source.PixelHeight);
        tiff.SetField(TiffTag.XRESOLUTION, source.DpiX);
        tiff.SetField(TiffTag.YRESOLUTION, source.DpiY);
        tiff.SetField(TiffTag.BITSPERSAMPLE, 8);
        tiff.SetField(TiffTag.SAMPLESPERPIXEL, 3);
        tiff.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);

        int stride = source.PixelWidth * ((source.Format.BitsPerPixel + 7) / 8);
        byte[] pixels = new byte[source.PixelHeight * stride];
        source.CopyPixels(pixels, stride, 0);

        for (int i = 0, offset = 0; i < source.PixelHeight; i++)
        {
            tiff.WriteScanline(pixels, offset, i, 0);
            offset += stride;
        }
    }
    MessageBox.Show("Finished");
}
This converts the image and I can see a JPEG image, but the colours are messed up. I'm guessing I'm missing a tag or two for the TIFF, or something like the photometric interpretation is wrong, but I'm not entirely clear on what is needed.
Cheers,
It's not clear what you mean by "colours are messed up", but you should probably convert the BGR samples of the BitmapSource to the RGB order expected by LibTiff.Net.
That is, make sure the order of the colour channels is RGB (most probably it's not) before feeding the pixels to the WriteScanline method.

Convert image document to black and white with OpenCV

I'm new to OpenCV and image processing, and I'm not sure how to solve my problem.
I have a photo of a document taken with an iPhone and I want to convert the document to black and white. I tried thresholding, but the resulting text was not good (a little blurry and unreadable). I'd like the text to look the same as in the original image, only black, with the background white. What can I do?
P.S. When I photograph only a part of the document, where the text is quite big, the result is fine.
I will be grateful for any help.
(The example image and the result were attached to the original post.)
My attempt, maybe a little more readable than yours:
IplImage* pRGBImg = cvLoadImage(input_file.c_str(), CV_LOAD_IMAGE_UNCHANGED);
if (!pRGBImg)
{
    std::cout << "ERROR: Failed to load input image" << std::endl;
    return -1;
}
// Allocate the grayscale image
IplImage* pGrayImg = cvCreateImage(cvSize(pRGBImg->width, pRGBImg->height), pRGBImg->depth, 1);
// Convert to grayscale (cvLoadImage returns BGR-ordered data)
cvCvtColor(pRGBImg, pGrayImg, CV_BGR2GRAY);
// Dilate (one iteration)
cvDilate(pGrayImg, pGrayImg, 0, 1);
// With CV_THRESH_OTSU the threshold is chosen automatically and the 30 is ignored
cvThreshold(pGrayImg, pGrayImg, 30, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
cvSmooth(pGrayImg, pGrayImg, CV_BLUR, 2, 2);
cvSaveImage("out.png", pGrayImg);
Thresholding is used for a different purpose. If you just want to convert the image to black and white, do this (using OpenCV 2.2):
cv::Mat image_name = cv::imread("fileName", 0);
The second parameter 0 tells imread to load the colour image as a grayscale one.
And if you want to save it as a black-and-white image file, code this:
cv::Mat image_name = cv::imread("fileName", 0);
cv::imwrite("bw_filename.jpg", image_name);
Using adaptive Gaussian thresholding is a good idea here, and it will also enhance the legibility of the text in the image.
You can do it with a single call:
cv::adaptiveThreshold(src, dst, maxValue, adaptiveMethod, thresholdType, blockSize, C);
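As a concrete sketch (the file names, block size, and constant are placeholder assumptions to tune against the document's text size):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Load the photographed document directly as grayscale
    cv::Mat gray = cv::imread("document.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (gray.empty()) return -1;

    cv::Mat bw;
    // Each pixel is compared against a Gaussian-weighted mean of its
    // 15x15 neighbourhood, offset by the constant 10
    cv::adaptiveThreshold(gray, bw, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C,
                          cv::THRESH_BINARY, 15, 10);

    cv::imwrite("document_bw.png", bw);
    return 0;
}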

Are 2 simultaneous webcam windows possible with OpenCV?

I am applying common image transforms to my live webcam capture. I want to display the original webcam feed in one window and the transformed feed in another. However, I am getting the same (filtered) image in both windows, and I am wondering if I am limited by the OpenCV API or if I am missing something. My code snippet looks like this:
/* allocate resources */
cvNamedWindow("Original", CV_WINDOW_AUTOSIZE);
cvNamedWindow("Filtered", CV_WINDOW_AUTOSIZE);
CvCapture* capture = cvCaptureFromCAM(0);

do {
    IplImage* img = cvQueryFrame(capture);
    cvShowImage("Original", img);
    Filters* filters = new Filters(img);
    IplImage* dst = filters->doSobel();
    cvShowImage("Filtered", dst);
    cvWaitKey(10);
} while (1);

/* deallocate resources */
cvDestroyWindow("Original");
cvDestroyWindow("Filtered");
cvReleaseCapture(&capture);
It's possible! Try copying img to another IplImage before sending it to processing and see if that works first.
Yes, I know what you're going to say, but just try that first and see if it does what you want. The code below is just to illustrate what you should do; I don't know if it will work:
/* allocate resources */
cvNamedWindow("Original", CV_WINDOW_AUTOSIZE);
cvNamedWindow("Filtered", CV_WINDOW_AUTOSIZE);
CvCapture* capture = cvCaptureFromCAM(0);

do {
    IplImage* img = cvQueryFrame(capture);
    cvShowImage("Original", img);
    IplImage* img_cpy = cvCloneImage(img); // cvCloneImage allocates the copy itself
    Filters* filters = new Filters(img_cpy);
    IplImage* dst = filters->doSobel();
    cvShowImage("Filtered", dst);
    /* Be aware that if you release img_cpy here it might not display
     * the data on the window. On the other hand, not doing it now will
     * cause a memory leak.
     */
    //cvReleaseImage( &img_cpy );
    cvWaitKey(10);
} while (1);

/* deallocate resources */
cvDestroyWindow("Original");
cvDestroyWindow("Filtered");
cvReleaseCapture(&capture);
