compute dct in opencv using cvDCT function - c

I am using OpenCV 2.1 with VS2010 (coding in C). After extracting the blue plane from an RGB image, I applied the DCT to it to get the transformed matrix:
cvDCT(source,destination,CV_DXT_FORWARD);
It builds successfully, but it fails at runtime with the error:
"Unhandled exception at 0x75c89617 in freqDomain.exe: Microsoft C++ exception: cv::Exception at memory location 0x001ce35c.."
I think the error is in the type I set for the output image. Is it okay to set it to IPL_DEPTH_8U, or should it be float?
This is my code snippet:
int main()
{
    IplImage *input, *output, *b, *g, *r;
    input = cvLoadImage("dolphin.jpg");

    int width, height;
    width = input->width;
    height = input->height;

    b = cvCreateImage(cvSize(width,height), IPL_DEPTH_8U, 1);
    g = cvCreateImage(cvSize(width,height), IPL_DEPTH_8U, 1);
    r = cvCreateImage(cvSize(width,height), IPL_DEPTH_8U, 1);
    cvSplit(input, b, g, r, NULL);

    cvNamedWindow("blue", CV_WINDOW_AUTOSIZE);

    IplImage *b_dct, *g_dct, *r_dct;
    b_dct = cvCreateImage(cvSize(width,height), 8, 1);
    g_dct = cvCreateImage(cvSize(width,height), 8, 1);
    r_dct = cvCreateImage(cvSize(width,height), 8, 1);

    cvDCT(b, b_dct, 0); // doubt??
    cvShowImage("blue", b_dct);
    ...

Yeah, found the solution :)
The problem was with the datatype of the source image; it should be float or double. I used the cvConvert function to convert from unsigned 8-bit to 32-bit float values.
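For reference, a minimal sketch of that fix (reusing b, width and height from the snippet above, OpenCV 2.1 C API): convert the 8-bit plane to 32-bit float before calling cvDCT. Note that OpenCV's DCT supports only even-sized arrays, so the plane may need cropping or padding first.
IplImage *b_float  = cvCreateImage(cvSize(width, height), IPL_DEPTH_32F, 1);
IplImage *b_dct32f = cvCreateImage(cvSize(width, height), IPL_DEPTH_32F, 1);

cvConvert(b, b_float);                    // 8U -> 32F, no scaling
cvDCT(b_float, b_dct32f, CV_DXT_FORWARD); // forward DCT on float data

// The coefficients are not directly displayable as 8-bit values; scale them
// (e.g. with cvConvertScale) into a viewable range before cvShowImage.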

Related

OpenCV Tiff Wrong Color Values Readout

I have a 16-bit tiff image with no color profile (camera profile embedded), and I am trying to read its RGB values in OpenCV. However, the output values are totally different from the values shown when the image is opened in GIMP, for example (GIMP opened with the "keep the image's profile" option; no profile conversion). I have also tried another image studio application, CaptureOne, and its result agrees with GIMP but differs from the OpenCV output.
I am not sure whether reading the image in OpenCV is somehow wrong, in spite of using the IMREAD_UNCHANGED flag.
I have also tried reading the image with the FreeImage library, but the result is the same.
Here is a snippet of the code reading the pixel values in OpenCV:
const float Conv16_8 = 255.f / 65535.f;
cv::Vec3d curVal;

// upperLeft/lowerRight are just some pre-defined corners for the ROI
for (int row = upperLeft.y; row <= lowerRight.y; row++) {
    unsigned char* dataUCPtr = img.data + row * img.step[0];
    unsigned short* dataUSPtr = (unsigned short*)dataUCPtr;
    dataUCPtr += 3 * upperLeft.x;
    dataUSPtr += 3 * upperLeft.x;
    for (int col = upperLeft.x; col <= lowerRight.x; col++) {
        if (/* some check if the pixel is valid */) {
            if (img.depth() == CV_8U) {
                for (int chan = 0; chan < 3; chan++) {
                    curVal[chan] = *dataUCPtr++;
                }
            }
            else if (img.depth() == CV_16U) {
                for (int chan = 0; chan < 3; chan++) {
                    curVal[chan] = (*dataUSPtr++) * Conv16_8;
                }
            }
            avgX += curVal;
        }
        else {
            dataUCPtr += 3;
            dataUSPtr += 3;
        }
    }
}
Here is the image I am reading, with its RGB readouts in CaptureOne Studio (AdobeRGB) compared against the OpenCV RGB readouts for the patches A1 (white) through F1 (black).
PS1: I have also tried changing the output color space in GIMP/CaptureOne to sRGB, but the difference stays roughly the same and does not get any closer to the OpenCV output.
PS2: I am reversing the channel order of OpenCV's imread output with COLOR_RGB2BGR before extracting the RGB values from the image.
OP said:
I have a 16-bit tiff image with no color profile (camera profile embedded)
Well no, your image definitely has a color profile, and it should not be ignored. The embedded profile is as important as the numeric values of each pixel. Without a defined profile, the pixel values are somewhat meaningless.
From what I can tell, OpenCV does not linearize gamma by default... except when it does. Regardless, the tone response curve indicated in the embedded profile is unique to it and does not match the sRGB curve, so the sRGB transfer functions can't be used.
If you are looking for performance, applying the curve via a LUT is usually more efficient than a full-on color management system.
In this case, the following LUT was taken from the color profile: 16-bit values, 256 steps:
// Phase One TRC from color profile
profileTRC = [0x0000,0x032A,0x0653,0x097A,0x0CA0,0x0FC2,0x12DF,0x15F8,0x190C,0x1C19,0x1F1E,0x221C,0x2510,0x27FB,0x2ADB,0x2DB0,0x3079,0x3334,0x35E2,0x3882,0x3B11,0x3D91,0x4000,0x425D,0x44A8,0x46E3,0x490C,0x4B26,0x4D2F,0x4F29,0x5113,0x52EF,0x54BC,0x567B,0x582D,0x59D1,0x5B68,0x5CF3,0x5E71,0x5FE3,0x614A,0x62A6,0x63F7,0x653E,0x667B,0x67AE,0x68D8,0x69F9,0x6B12,0x6C23,0x6D2C,0x6E2D,0x6F28,0x701C,0x710A,0x71F2,0x72D4,0x73B2,0x748B,0x755F,0x762F,0x76FC,0x77C6,0x788D,0x7951,0x7A13,0x7AD4,0x7B93,0x7C51,0x7D0F,0x7DCC,0x7E8A,0x7F48,0x8007,0x80C8,0x8189,0x824C,0x8310,0x83D5,0x849B,0x8562,0x862B,0x86F4,0x87BF,0x888A,0x8956,0x8A23,0x8AF2,0x8BC0,0x8C90,0x8D61,0x8E32,0x8F04,0x8FD7,0x90AA,0x917E,0x9252,0x9328,0x93FD,0x94D3,0x95AA,0x9681,0x9758,0x9830,0x9908,0x99E1,0x9ABA,0x9B93,0x9C6C,0x9D45,0x9E1F,0x9EF9,0x9FD3,0xA0AD,0xA187,0xA260,0xA33A,0xA414,0xA4EE,0xA5C8,0xA6A1,0xA77B,0xA854,0xA92D,0xAA05,0xAADD,0xABB5,0xAC8D,0xAD64,0xAE3B,0xAF11,0xAFE7,0xB0BC,0xB191,0xB265,0xB339,0xB40C,0xB4DE,0xB5B0,0xB680,0xB750,0xB820,0xB8EE,0xB9BC,0xBA88,0xBB54,0xBC1F,0xBCE9,0xBDB1,0xBE79,0xBF40,0xC005,0xC0CA,0xC18D,0xC24F,0xC310,0xC3D0,0xC48F,0xC54D,0xC609,0xC6C5,0xC780,0xC839,0xC8F2,0xC9A9,0xCA60,0xCB16,0xCBCA,0xCC7E,0xCD31,0xCDE2,0xCE93,0xCF43,0xCFF2,0xD0A0,0xD14D,0xD1FA,0xD2A5,0xD350,0xD3FA,0xD4A3,0xD54B,0xD5F2,0xD699,0xD73E,0xD7E3,0xD887,0xD92B,0xD9CE,0xDA6F,0xDB11,0xDBB1,0xDC51,0xDCF0,0xDD8F,0xDE2C,0xDEC9,0xDF66,0xE002,0xE09D,0xE138,0xE1D2,0xE26B,0xE304,0xE39C,0xE434,0xE4CB,0xE562,0xE5F8,0xE68D,0xE722,0xE7B7,0xE84B,0xE8DF,0xE972,0xEA04,0xEA97,0xEB29,0xEBBA,0xEC4B,0xECDC,0xED6C,0xEDFC,0xEE8B,0xEF1A,0xEFA9,0xF038,0xF0C6,0xF154,0xF1E1,0xF26F,0xF2FC,0xF388,0xF415,0xF4A1,0xF52D,0xF5B9,0xF645,0xF6D0,0xF75B,0xF7E6,0xF871,0xF8FC,0xF987,0xFA11,0xFA9B,0xFB26,0xFBB0,0xFC3A,0xFCC4,0xFD4E,0xFDD7,0xFE61,0xFEEB,0xFF75,0xFFFF]
If a matching array of the linearized values were needed, it would be
[0x0000,0x0101,0x0202,0x0303,0x0404....
But such an array is not needed for most uses, as the index into the profileTRC array directly relates to the linear value: profileTRC[0x80] is 0xAD64, and the corresponding linear value is 0x80 * 0x101.
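To illustrate using that table in the direction this question needs (device-encoded sample back to an approximately linear value), here is a small hypothetical helper (the name linearizeSample and the interpolation scheme are mine, not from the answer): it searches the monotonically increasing profileTRC table for the bracketing entries and interpolates the fractional index, whose linear value is index * 0x101 as described above.
#include <cstdint>
#include <algorithm>

// Hypothetical helper: invert the 256-entry profileTRC (linear index -> encoded
// value) to map an encoded 16-bit sample back to an approximately linear value.
uint16_t linearizeSample(uint16_t encoded, const uint16_t trc[256])
{
    const uint16_t* hi = std::lower_bound(trc, trc + 256, encoded); // first entry >= encoded
    if (hi == trc)       return 0;       // at or below the first entry
    if (hi == trc + 256) return 0xFFFF;  // above the last entry

    const uint16_t* lo = hi - 1;         // *lo < encoded <= *hi, so *hi > *lo
    double t = double(encoded - *lo) / double(*hi - *lo);   // position between the two entries
    double linearIndex = double(lo - trc) + t;               // fractional table index

    return (uint16_t)(linearIndex * 0x101 + 0.5);             // index * 0x101 = linear value
}
For example, linearizeSample(0xAD64, profileTRC) returns 0x8080, i.e. 0x80 * 0x101.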
It turned out that it is all about loading and applying the proper ICC profile to the cv::Mat data. To do that, one must use a color management engine alongside OpenCV, such as LittleCMS.
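For completeness, a minimal sketch (not the poster's actual code) of what loading and applying the profile with LittleCMS 2 can look like; the file name phaseone.icc, the choice of sRGB as the output space and the relative colorimetric intent are assumptions:
#include <lcms2.h>
#include <opencv2/core/core.hpp>

// Transform a CV_16UC3, RGB-ordered Mat from the embedded camera profile to sRGB.
cv::Mat applyCameraProfile(const cv::Mat& src)
{
    cv::Mat dst(src.size(), src.type());

    cmsHPROFILE in  = cmsOpenProfileFromFile("phaseone.icc", "r"); // embedded profile, saved to disk
    cmsHPROFILE out = cmsCreate_sRGBProfile();

    cmsHTRANSFORM xform = cmsCreateTransform(in,  TYPE_RGB_16,
                                             out, TYPE_RGB_16,
                                             INTENT_RELATIVE_COLORIMETRIC, 0);

    // Transform row by row, since Mat rows are not guaranteed to be contiguous.
    for (int row = 0; row < src.rows; ++row)
        cmsDoTransform(xform, src.ptr(row), dst.ptr(row), src.cols);

    cmsDeleteTransform(xform);
    cmsCloseProfile(in);
    cmsCloseProfile(out);
    return dst;
}
Doing the transform row by row keeps the sketch valid for ROI Mats whose rows are not contiguous in memory.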

OpenCV, cvCvtColor : null array pointer is passed in function cvgetmat

I'm just getting started with OpenCV; I have to use it for a project at school. I'm using Code::Blocks on Windows.
I am trying to write a very simple function that converts an image from RGB to HSV and then displays the Hue channel.
long traiter_image(IplImage* Image)
{
    IplImage* ImHSV = 0;
    IplImage* chans[3];

    cvCvtColor(Image, ImHSV, CV_BGR2HSV); // BGR to HSV

    // split channels
    cvSplit(ImHSV, chans[0], chans[1], chans[2], NULL);

    Afficher("Teinte", chans[0]); // Display Hue
    return 0;
}
I don't get any build errors, but when I execute the code a window appears telling me that "null array pointer is passed in function cvgetmat". The problem comes from the cvCvtColor function, but I don't know how to fix it...
Before calling cvCvtColor(), you should create the memory for the output image, which should be of the same size and depth as the input image.
For your case, it should be:
IplImage* ImHSV = cvCreateImage(cvGetSize(Image), IPL_DEPTH_8U, 3);
cvCvtColor(Image, ImHSV, CV_BGR2HSV); // BGR to HSV
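Note that the same applies to the three channel images passed to cvSplit later in the function: chans[0..2] are uninitialized pointers and must be allocated as well. A fuller sketch of the function (keeping the asker's Afficher display helper, and assuming a 3-channel 8-bit input):
long traiter_image(IplImage* Image)
{
    IplImage* ImHSV = cvCreateImage(cvGetSize(Image), IPL_DEPTH_8U, 3);
    IplImage* chans[3];
    for (int i = 0; i < 3; i++)
        chans[i] = cvCreateImage(cvGetSize(Image), IPL_DEPTH_8U, 1);

    cvCvtColor(Image, ImHSV, CV_BGR2HSV);                // BGR to HSV
    cvSplit(ImHSV, chans[0], chans[1], chans[2], NULL);  // split into H, S, V planes

    Afficher("Teinte", chans[0]);                        // display the Hue plane

    // Release the temporaries when done
    cvReleaseImage(&ImHSV);
    for (int i = 0; i < 3; i++)
        cvReleaseImage(&chans[i]);
    return 0;
}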

unsigned char pixel_intensity[] to image; C code, Linux

I have a data array of pixel intensities (e.g. unsigned char pixel_intensity[4] = {0, 255, 255, 0}) and I need to create an image from it in C code on Linux (Raspberry Pi).
What is the easiest way to do it?
I would suggest using the netpbm format, as it is very easy to generate programmatically and well documented.
I have written a little demonstration below of how to write a simple greyscale ramp to a 256x100 image.
#include <stdio.h>
#include <stdlib.h>

int main(){
    FILE *imageFile;
    int x, y, pixel, height = 100, width = 256;

    imageFile = fopen("image.pgm", "wb");
    if(imageFile == NULL){
        perror("ERROR: Cannot open output file");
        exit(EXIT_FAILURE);
    }

    fprintf(imageFile, "P5\n");                   // P5 filetype
    fprintf(imageFile, "%d %d\n", width, height); // dimensions
    fprintf(imageFile, "255\n");                  // Max pixel

    /* Now write a greyscale ramp */
    for(x = 0; x < height; x++){
        for(y = 0; y < width; y++){
            pixel = y;
            fputc(pixel, imageFile);
        }
    }
    fclose(imageFile);
    return 0;
}
The header of the image looks like this:
P5
256 100
255
<binary data of pixels>
And the resulting image is a horizontal greyscale ramp (I converted it to a JPEG for rendering here).
Once you have an image, you can use the superb ImageMagick tools to convert it to anything else you like, e.g. if you want the greyscale image created above converted into a JPEG, just use ImageMagick like this:
convert image.pgm image.jpg
Or, if you want a PNG
convert image.pgm image.png
You can actually use PGM format images directly on the web; by convention, the MIME type is image/x-portable-graymap.
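Applied to the array in the question, a minimal sketch along the same lines (the 2x2 layout of the four samples is an assumption for illustration):
#include <stdio.h>
#include <stdlib.h>

int main(){
    unsigned char pixel_intensity[4] = {0, 255, 255, 0};
    int width = 2, height = 2;                      /* assumed layout of the data */

    FILE *imageFile = fopen("pixels.pgm", "wb");
    if(imageFile == NULL){
        perror("ERROR: Cannot open output file");
        exit(EXIT_FAILURE);
    }

    fprintf(imageFile, "P5\n%d %d\n255\n", width, height);          /* PGM header */
    fwrite(pixel_intensity, 1, (size_t)width * height, imageFile);  /* raw 8-bit pixels */
    fclose(imageFile);
    return 0;
}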

SIFT Assertion Failed error

I'm trying to use SIFT to match two images, and I'm using the code below:
cv::initModule_nonfree();
cv::Mat matFrame(frame);
cv::Mat matFrameAnt(frameAnterior);
cv::SiftFeatureDetector detector(400); //I've tried different values here
cv::SiftDescriptorExtractor extractor(400); //but i get always the same error
std::vector<cv::KeyPoint> keypoints1;
std::vector<cv::KeyPoint> keypoints2;
detector.detect( matFrame, keypoints1 );
detector.detect( matFrameAnt, keypoints2 );
cv::Mat feat1;
cv::Mat feat2;
cv::Mat descriptor1;
cv::Mat descriptor2;
extractor.compute( matFrame, keypoints1, descriptor1 );
extractor.compute( matFrameAnt, keypoints2, descriptor2 );
std::vector<cv::DMatch> matches;
cv::BFMatcher matcher(cv::NORM_L2, false);
matcher.match(descriptor1,descriptor2, matches);
cv::Mat result;
cv::drawMatches( matFrame, keypoints1, matFrameAnt, keypoints2, matches, result );
cv::namedWindow("SIFT", CV_WINDOW_AUTOSIZE );
cv::imshow("SIFT", result);
I get this error when I run the code (it compiles fine):
"OpenCV Error: Assertion failed (firstOctave >= -1 && actualNlayers <= nOctaveLayers) in unknown function, file ......\src\opencv\modules\nonfree\src\sift.cpp, line 755".
I understand that the function is getting a non-positive value somewhere, so I printed all the relevant values from my code and found out that the sizes of my two keypoint vectors are -616431 and -616422.
The two images I'm using are black-and-white images, with a black background and my hand (white) in the middle.
What's happening? Am I using invalid images? Am I using cv::SiftFeatureDetector and cv::SiftDescriptorExtractor incorrectly?
It seems you are not sure what those constructor parameters do. This feature is fairly thinly documented, so you either have to dig into the source, or let me tell you what you did.
cv::SiftFeatureDetector detector(50)
This means you will get at most 50 keypoints.
cv::SiftDescriptorExtractor extractor(400);
This means your magnification for extraction is 400x. This parameter should be on the order of 1 for normal results.
The rest of the documentation is here: http://docs.opencv.org/2.3/modules/features2d/doc/common_interfaces_of_feature_detectors.html#SiftFeatureDetector
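As a point of comparison, here is a minimal sketch of the same pipeline using the default constructor parameters (assuming OpenCV 2.4.x with the nonfree module built in and 8-bit input images; the file names are placeholders):
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>

int main()
{
    cv::initModule_nonfree();                       // register SIFT/SURF

    cv::Mat img1 = cv::imread("frame1.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat img2 = cv::imread("frame2.png", CV_LOAD_IMAGE_GRAYSCALE);

    cv::SiftFeatureDetector detector;               // default parameters
    cv::SiftDescriptorExtractor extractor;

    std::vector<cv::KeyPoint> kp1, kp2;
    detector.detect(img1, kp1);
    detector.detect(img2, kp2);

    cv::Mat desc1, desc2;
    extractor.compute(img1, kp1, desc1);
    extractor.compute(img2, kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_L2, false);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    cv::Mat result;
    cv::drawMatches(img1, kp1, img2, kp2, matches, result);
    cv::imshow("SIFT", result);
    cv::waitKey(0);
    return 0;
}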

OpenCV: Resizing an Image

I seem to be missing something, but I cannot figure out how to resize an image. Here is the code:
#include <opencv2\core\core.hpp>
#include <opencv2\imgproc\imgproc.hpp>
#include <opencv2\highgui\highgui.hpp>
using namespace cv;
int main(int argc, char* argv[])
{
    IplImage* src = NULL;
    IplImage* dst = NULL;

    src = cvLoadImage("image.tif");
    dst = cvCreateImage(cvSize(src->width / 10, src->height / 10), src->depth, src->nChannels);

    resize(src, dst, dst->nSize, 0.1, 0.1, CV_INTER_AREA);
    return 0;
}
But this code only results in the compiler error:
error C2664: 'cv::resize' : cannot convert parameter 1 from 'IplImage *' to 'cv::InputArray'
Can someone tell me what's wrong here? I mean, how can I create an InputArray from an IplImage?
Thanks,
Christian
You are mixing up OpenCV's C and C++ functions. If you are programming in C++, you should use the Mat class to store image data. If you are, on the other hand, using pure C, you should use the function cvResize to resize your IplImage.
As you can see in the OpenCV API documentation, there is a C and a C++ programming interface for every function. They essentially do the same thing, and you can of course use the C functions in C++, but you cannot pass C OpenCV structs (like IplImage) to C++ OpenCV functions (like resize()).
This introduction describes the basic concepts of the OpenCV C++ API.
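A minimal sketch of the two options (the file name and the 1/10 scale factor are taken from the question; error handling omitted):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    // Option 1: stay with the C API and use cvResize on the IplImage.
    IplImage* src = cvLoadImage("image.tif");
    IplImage* dst = cvCreateImage(cvSize(src->width / 10, src->height / 10),
                                  src->depth, src->nChannels);
    cvResize(src, dst, CV_INTER_AREA);

    // Option 2: use the C++ API with cv::Mat and cv::resize.
    cv::Mat srcMat = cv::imread("image.tif");
    cv::Mat dstMat;
    cv::resize(srcMat, dstMat, cv::Size(), 0.1, 0.1, cv::INTER_AREA);

    return 0;
}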
