Skin Color Detection in OpenCV - c

I am trying to do skin color detection in OpenCV.
1) First I converted the image from RGB to HSV:
cvCvtColor(frame, hsv, CV_BGR2HSV);
2) Then I applied the skin color threshold to the HSV image:
cvInRangeS(hsv, hsv_min, hsv_max, mask); // hsv_min & hsv_max are the values for skin detection
3) This generates the mask, which contains only the skin regions but as a black & white image, so I converted that image to RGB:
cvCvtColor(mask, temp, CV_GRAY2RGB);
4) Now I want only the skin color, in RGB:
for (c = 0; c < frame->height; c++) {
    uchar* ptr = (uchar*) ((frame->imageData) + (c * frame->widthStep));
    uchar* ptr2 = (uchar*) ((temp->imageData) + (c * temp->widthStep));
    for (d = 0; d < frame->width; d++) {
        if (ptr2[3*d+0] != 255 && ptr2[3*d+1] != 255 && ptr2[3*d+2] != 255 && ptr2[3*d+3] != 255) {
            ptr[3 * d + 0] = 0;
            ptr[3 * d + 1] = 0;
            ptr[3 * d + 2] = 0;
            ptr[3 * d + 3] = 0;
        }
    }
}
But I am not getting the image I want: one that contains only the skin color, in RGB.
Any solution?
Thanks
1st: Original image
2nd: Skin-detected image in black & white
3rd: Output (not the actual desired result)

You're already quite close.
Given that you already have a 3-channel mask:
Mat mask, temp;
cv::cvtColor(mask, temp, CV_GRAY2RGB);
All you need to do is combine it with your original image to mask out all the non-skin colors:
(And no, don't write error-prone loops there; better to rely on the built-in functionality!)
Mat draw = frame & temp; // short for bitwise_and()
imshow("skin",draw);
waitKey();

Alternatively, without having to convert the mask to RGB, you could use .copyTo(), passing the mask as a parameter:
cv::cvtColor(inputFrame, hsv, CV_BGR2HSV);
cv::inRange(hsv, hsv_min, hsv_max, mask);
cv::Mat outputFrame;
inputFrame.copyTo(outputFrame, mask);
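If you're staying with the old C API from the question, the same masking can be done with cvCopy(), which also takes an optional mask argument. A minimal sketch, where output is a hypothetical destination image allocated to match frame:
IplImage* output = cvCreateImage(cvGetSize(frame), frame->depth, frame->nChannels);
cvZero(output);              // start from an all-black image
cvCopy(frame, output, mask); // copy only the pixels where mask is non-zero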

Related

Converting from PGM to PBM but getting wrong output

I've written a program that takes a PGM image as input and converts it to a PBM file. However, the image I get as output is incorrect.
I determine that a pixel is white if its value is bigger than (max+1)/2, and then use putchar() to output the value 0; if it's black, I output 1 (I've also tried max instead of 1, and 255). However, I get a bunch of vertical lines as output. I'm running this in a Linux terminal with the command:
./prog < image1.pgm > image2.pbm
This is the function I'm using to read and transform the image (where size is the total number of pixels, i.e. height × width, and max is the maximum value of each pixel):
void p5_to_p4(int size, int max) {
    int g1, g2, g3;
    int i;
    for (i = 0; i < size; i++) {
        g1 = getchar();
        g2 = getchar();
        g3 = getchar();
        if (g1 > ((max + 1) / 2)) {
            putchar(0);
            putchar(0);
            putchar(0);
        }
        else {
            putchar(max);
            putchar(max);
            putchar(max);
        }
    }
}
This is the output image I'm getting (in JPEG form), compared with what I should actually be getting.
I've written a program that takes a PGM image as input and converts it to a PBM file. However, the image I get as output is incorrect.
Not surprising. Taking the function presented to be for converting a pixel raster from the format of NetPBM P5 ("PGM") to the pixel raster format of NetPBM P4 ("PBM"), the function has at least these issues:
PGM files can use either one-byte or two-byte samples, depending on the maximum sample value specified, but the function presented neither adapts to the maximum sample value nor assumes either of the two valid sample sizes. Instead, it assumes three-byte samples. Perhaps it is supposing three color channels, but PGM has only one.
PBM files use one byte per eight pixels, but the function outputs three bytes per one pixel.
So, first, read the samples correctly. In that regard, do note that if you have to handle two-byte samples then they are stored most-significant byte first, which is probably opposite to your machine's native byte order.
Second, you'll need to pack the output 8 one-bit samples per byte. Read the linked specs for details if you need them. Note that if the number of samples is not divisible by eight then you'll need to add one or more dummy bits to the last byte.
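For illustration, a minimal sample reader handling both sample sizes might look like this (a sketch; it assumes the PGM header has already been consumed):
/* Read one PGM sample from stdin; samples are two bytes,
   most-significant byte first, when max > 255. */
int read_sample(int max) {
    int g = getchar();
    if (max > 255)
        g = (g << 8) | getchar();
    return g;
}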
The PBM file format packs 8 pixels into a byte, MSB first and LSB last. If the width of the image is not a multiple of 8, the last odd pixels are packed into a byte without wrapping around to the next line. You also need to let the function know the width and height individually, not the total size.
Then the converter function will look like:
void p5_to_p4(int width, int height, int max)
{
    int g, b; // g for gray, b for binary
    int i, j, k;
    for (i = 0; i < height; i++) {
        for (j = 0; j < width / 8; j++) {
            b = 0;
            for (k = 0; k < 8; k++) { // process 8 pixels at a time
                g = getchar();
                if (max > 255) { // in case of 2 bytes per pixel pgm
                    g = (g << 8) + getchar();
                }
                b <<= 1;
                if (g < (max + 1) / 2) {
                    b |= 1; // raise the bit for a dark pixel
                }
            }
            putchar(b);
        }
        if (width % 8) { // handle odd pixels, if any
            b = 0;
            for (k = 0; k < width % 8; k++) {
                g = getchar();
                if (max > 255) {
                    g = (g << 8) + getchar();
                }
                b <<= 1;
                if (g < (max + 1) / 2) {
                    b |= 1;
                }
            }
            b <<= 8 - width % 8; // justify to the msb
            putchar(b);
        }
    }
}
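Note that the caller still has to parse the P5 header and emit the P4 header before the raster. For example (a sketch, assuming width, height and max were read from the input header):
printf("P4\n%d %d\n", width, height); // P4 header precedes the packed raster
p5_to_p4(width, height, max);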

RGB to YUV conversion with libav (ffmpeg) triplicates image

I'm building a small program to capture the screen (using the X11 MIT-SHM extension) to video. It works well if I create individual PNG files of the captured frames, but now I'm trying to integrate libav (ffmpeg) to create the video, and I'm getting... funny results.
The furthest I've been able to get is this. The expected result (a PNG created directly from the RGB data of the XImage) is this:
However, the result I'm getting is this:
As you can see, the colors are funky and the image appears cropped three times. I have a loop where I capture the screen; first I generate the individual PNG files (currently commented out in the code below), and then I try to use libswscale to convert from RGB24 to YUV420:
while (gRunning) {
    printf("Processing frame framecnt=%i \n", framecnt);
    if (!XShmGetImage(display, RootWindow(display, DefaultScreen(display)), img, 0, 0, AllPlanes)) {
        printf("\n Ooops.. Something is wrong.");
        break;
    }
    // PNG generation
    // snprintf(imageName, sizeof(imageName), "salida_%i.png", framecnt);
    // writePngForImage(img, width, height, imageName);
    unsigned long red_mask = img->red_mask;
    unsigned long green_mask = img->green_mask;
    unsigned long blue_mask = img->blue_mask;
    // Write image data
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            unsigned long pixel = XGetPixel(img, x, y);
            unsigned char blue = pixel & blue_mask;
            unsigned char green = (pixel & green_mask) >> 8;
            unsigned char red = (pixel & red_mask) >> 16;
            pixel_rgb_data[y * width + x * 3] = red;
            pixel_rgb_data[y * width + x * 3 + 1] = green;
            pixel_rgb_data[y * width + x * 3 + 2] = blue;
        }
    }
    uint8_t* inData[1] = { pixel_rgb_data };
    int inLinesize[1] = { in_w };
    printf("Scaling frame... \n");
    int sliceHeight = sws_scale(sws_context, inData, inLinesize, 0, height, pFrame->data, pFrame->linesize);
    printf("Obtained slice height: %i \n", sliceHeight);
    pFrame->pts = framecnt * (pVideoStream->time_base.den) / ((pVideoStream->time_base.num) * 25);
    printf("Frame pts: %li \n", pFrame->pts);
    int got_picture = 0;
    printf("Encoding frame... \n");
    int ret = avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_picture);
    // int ret = avcodec_send_frame(pCodecCtx, pFrame);
    if (ret != 0) {
        printf("Failed to encode! Error: %i\n", ret);
        return -1;
    }
    printf("Succeed to encode frame: %5d - size: %5d\n", framecnt, pkt.size);
    framecnt++;
    pkt.stream_index = pVideoStream->index;
    ret = av_write_frame(pFormatCtx, &pkt);
    if (ret != 0) {
        printf("Error writing frame! Error: %framecnt \n", ret);
        return -1;
    }
    av_packet_unref(&pkt);
}
I've placed the entire code at this gist. This question right here looks pretty similar to mine, but not quite, and the solution did not work for me, although I think this has something to do with the way the line stride is calculated.
Don't use av_image_alloc(); use av_frame_get_buffer().
(Unrelated to your question, but using avcodec_encode_video2() is considered bad practice now; it should be replaced with avcodec_send_frame() and avcodec_receive_packet().)
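For reference, a rough sketch of what that could look like, reusing pCodecCtx, pVideoStream and pFormatCtx from the question's code (error handling trimmed):
// Allocate the frame's data buffers with av_frame_get_buffer()
// instead of av_image_alloc():
AVFrame* pFrame = av_frame_alloc();
pFrame->format = AV_PIX_FMT_YUV420P;
pFrame->width  = width;
pFrame->height = height;
av_frame_get_buffer(pFrame, 0); // fills pFrame->data / pFrame->linesize

// ... fill the frame via sws_scale() as before, then encode:
AVPacket* pkt = av_packet_alloc();
int ret = avcodec_send_frame(pCodecCtx, pFrame);
while (ret >= 0) {
    ret = avcodec_receive_packet(pCodecCtx, pkt);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        break; // encoder needs more input (or is fully flushed)
    pkt->stream_index = pVideoStream->index;
    av_write_frame(pFormatCtx, pkt);
    av_packet_unref(pkt);
}
av_packet_free(&pkt);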
In the end, the error was not in the usage of libav but in the code that fills the pixel data from the XImage into the RGB vector. Instead of using:
pixel_rgb_data[y * width + x * 3 ] = red;
pixel_rgb_data[y * width + x * 3 + 1] = green;
pixel_rgb_data[y * width + x * 3 + 2] = blue;
I should have used this:
pixel_rgb_data[3 * (y * width + x) ] = red;
pixel_rgb_data[3 * (y * width + x) + 1] = green;
pixel_rgb_data[3 * (y * width + x) + 2] = blue;
Somehow I was multiplying only the horizontal displacement within the matrix, not the vertical displacement. The moment I changed it, it worked perfectly.

Create Bitmap in C

I'm trying to create a bitmap that shows the flight path of a bullet.
int drawBitmap(int height, int width, Point* curve, char* bitmap_name)
{
    int image_size = width * height * 3;
    int padding = width - (width % 4);
    struct _BitmapFileheader_ BMFH;
    struct _BitmapInfoHeader_ BMIH;
    BMFH.type_[1] = 'B';
    BMFH.type_[2] = 'M';
    BMFH.file_size_ = 54 + height * padding;
    BMFH.reserved_1_ = 0;
    BMFH.reserved_2_ = 0;
    BMFH.offset_ = 54;
    BMIH.header_size_ = 40;
    BMIH.width_ = width;
    BMIH.height_ = height;
    BMIH.colour_planes_ = 1;
    BMIH.bit_per_pixel_ = 24;
    BMIH.compression_ = 0;
    BMIH.image_size_ = image_size + height * padding;
    BMIH.x_pixels_per_meter_ = 2835;
    BMIH.y_pixels_per_meter_ = 2835;
    BMIH.colours_used_ = 0;
    BMIH.important_colours_ = 0;
    writeBitmap(BMFH, BMIH, curve, bitmap_name);
}

void* writeBitmap(struct _BitmapFileheader_ file_header,
    struct _BitmapInfoHeader_ file_infoheader, void* pixel_data, char* file_name)
{
    FILE* image = fopen(file_name, "w");
    fwrite((void*)&file_header, 1, sizeof(file_header), image);
    fwrite((void*)&file_infoheader, 1, sizeof(file_infoheader), image);
    fwrite((void*)pixel_data, 1, sizeof(pixel_data), image);
    fclose(image);
    return 0;
}
Curve is the return value from the function which calculates the path. It points at an array of Points, which is a struct of x and y coordinates.
I don't really know how to "put" the data into the Bitmap correctly.
I just started programming C recently and I'm quite lost at the moment.
You already know about taking up any slack space in each pixel row, but I see a problem in your calculation. Each pixel row must have length % 4 == 0. So with 3 bytes per pixel (24-bit):
length = ((3 * width) + 3) & -4; // -4 as I don't know the int size, say 0xFFFFFFFC
Look up the structure of a bitmap - perhaps you already have. Declare (or allocate) an image byte array of size height * length and fill it with zeros. Parse the bullet trajectory and find the range of the x and y coordinates. Scale these to the bitmap size width and height. Now parse the bullet trajectory again, scaling the coordinates to xx and yy, and write three 0xFF bytes (you specified 24-bit colour) into the correct place in the array for each bullet position:
if (xx >= 0 && xx < width && yy >= 0 && yy < height) {
    index = yy * length + xx * 3;
    bitmap[index] = 0xFF;
    bitmap[index + 1] = 0xFF;
    bitmap[index + 2] = 0xFF;
}
Finally save the bitmap info, header and image data to file. When that works, you can refine your use of colour.
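That final write might look something like this (a sketch reusing the question's structs and the bitmap/length names from above; it assumes the two header structs are packed to 14 and 40 bytes with no compiler padding):
FILE* image = fopen(file_name, "wb"); // "wb": BMP is a binary format
fwrite(&file_header, 1, sizeof(file_header), image);
fwrite(&file_infoheader, 1, sizeof(file_infoheader), image);
// Write the whole pixel array explicitly; sizeof(pixel_data) in the
// question only measures a pointer, not the image data.
fwrite(bitmap, 1, (size_t)height * length, image);
fclose(image);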

Apply patch between gif frames

I want to extract GIF frames to raw BGRA data; I used giflib to parse the format. I've got a first frame (I suppose it's like a key frame in video) that looks good, and a second one (it's actually 15 frames, but let's simplify) that looks like a diff frame. Here are samples:
It seems simple to restore the second full frame using the diff data, but here is the problem: I have no idea which color index means "not changed". Black pixels on the diff frame are actually black: their index in the color map is 255, which is rgb(0,0,0). I printed the whole color table and didn't find any other black entries. It's here if you're interested. The "BackgroundColor" index is 193, so it makes no sense either.
So I can't separate the "black" color from the "no" color. What if the second frame really contains some new black pixels (it does indeed, because the left eye moves in the animation)? The program should handle them differently: take the previous frame's color for the "no" color and rgb(0,0,0) for the "black" color.
UPD: here is my code. Subframe handling and memory cleanup are omitted. Here I assumed that the "no color" index is the last one in the color table. It actually works for my test file, but I'm not sure it will work in general.
DGifSlurp(image);
int* master = malloc(image->SWidth * image->SHeight * sizeof(int));
for (int i = 0; i < image->ImageCount; i++) {
    SavedImage* frame = &image->SavedImages[i];
    ColorMapObject* cmap = frame->ImageDesc.ColorMap ? frame->ImageDesc.ColorMap : image->SColorMap;
    int nocoloridx = cmap->ColorCount - 1;
    IplImage* mat = cvCreateImage(cvSize(frame->ImageDesc.Width, frame->ImageDesc.Height), IPL_DEPTH_8U, 4);
    mat->imageData = malloc(frame->ImageDesc.Width * frame->ImageDesc.Height * 4);
    for (int y = 0; y < frame->ImageDesc.Height; y++)
        for (int x = 0; x < frame->ImageDesc.Width; x++) {
            int offset = y * frame->ImageDesc.Width + x;
            int coloridx = frame->RasterBits[offset];
            if (coloridx == nocoloridx) {
                coloridx = master[offset];
            } else {
                master[offset] = coloridx;
            }
            GifColorType color = cmap->Colors[coloridx];
            cvSetComponent(mat, x, y, 0, color.Blue);
            cvSetComponent(mat, x, y, 1, color.Green);
            cvSetComponent(mat, x, y, 2, color.Red);
            cvSetComponent(mat, x, y, 3, 100);
        }
    cvNamedWindow("t", CV_WINDOW_AUTOSIZE);
    cvShowImage("t", mat);
    cvWaitKey(0);
}
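For what it's worth, the transparent index doesn't have to be the last color table entry; each frame's Graphics Control Extension can declare it explicitly. A sketch of reading it inside the loop above, assuming giflib >= 5.1 (which provides DGifSavedExtensionToGCB()):
GraphicsControlBlock gcb;
if (DGifSavedExtensionToGCB(image, i, &gcb) == GIF_OK &&
    gcb.TransparentColor != NO_TRANSPARENT_COLOR) {
    nocoloridx = gcb.TransparentColor; // the frame's declared "not changed" index
}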

WPF WriteableBitmap problems with transparency

First of all, I'll explain what I'm trying to do, just in case someone comes up with a better approach.
I need to blend two images using the "color" style from Photoshop (you know, the blending methods: screen, hard light, color...).
So, I have my base image (a PNG) and a WriteableBitmap generated at execution time (let's call it the color mask). I need to blend these two images using the "color" method and show the result in a UI component.
So far I'm just trying to draw things on the WriteableBitmap, but I'm facing unexpected behavior with the alpha channel.
My code so far:
// variable declarations
WriteableBitmap img = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgra32, null);
pixels = new uint[width * height];

// function for setting the color of one pixel
private void SetPixel(int x, int y, Color c)
{
    int pixel = width * y + x;
    int red = c.R;
    int green = c.G;
    int blue = c.B;
    int alpha = c.A;
    pixels[pixel] = (uint)((blue << 24) + (green << 16) + (red << 8) + alpha);
}

// function for painting all the pixels of the image
private void Render()
{
    Color c = new Color();
    c.R = 255; c.G = 255; c.B = 255; c.A = 50;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            SetPixel(x, y, c);
    img.WritePixels(new Int32Rect(0, 0, width, height), pixels, width * 4, 0);
    image1.Source = img; // image1 is a WPF Image in my XAML
}
Whenever I run the code with c.A = 255 I get the expected result: the whole image is set to the desired color. But if I set c.A to a different value, I get weird things.
If I set the color to BGRA = 0,0,255,50 I get a dark blue, almost black. If I set it to BGRA = 255,255,255,50 I get a yellow...
Any clue?!
Thanks in advance!
Change the order of your colour components to:
pixels[pixel] = (uint)((alpha << 24) + (red << 16) + (green << 8) + blue);
In Bgra32 each pixel is a little-endian 32-bit value with blue in the least significant byte and alpha in the most significant, so your original packing put the blue value into the alpha channel and the alpha value into the blue channel, which explains the odd colours.
