OpenCV k-means call assertion failed - C

I have read the C++ sample from the samples folder of the OpenCV source distribution and, if you omit the random picture generation, the kmeans call looks pretty simple – the author doesn't even allocate the centers/labels arrays (you can find it here). However, I can't do the same in C. If I don't allocate labels, I get an assertion error:
OpenCV Error: Assertion failed (labels.isContinuous() && labels.type == CV_32S && (labels.cols == 1 || labels.rows == 1) && labels.cols + labels.rows - 1 == data.rows) in cvKMeans2, file /tmp/opencv-xiht/opencv-2.4.9/modules/core/src/matrix.cpp, line 3094
OK, I tried to create an empty labels matrix, but the assertion message doesn't change at all.
IplImage* image = cvLoadImage("test.jpg", -1);
IplImage* normal = cvCreateImage(cvGetSize(image), IPL_DEPTH_32F, image->nChannels);
cvConvertScale(image, normal, 1/255.0, 0);
CvMat* points = cvCreateMat(image->width, image->height, CV_32F);
points->data.fl = normal->imageData;
CvMat* labels = cvCreateMat(1, points->cols, CV_32S);
CvMat* centers = NULL;
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 1.0);
// KMEANS_PP_CENTERS is undefined
int KMEANS_PP_CENTERS = 2;
cvKMeans2(points, 4, labels, criteria, 3, NULL, KMEANS_PP_CENTERS, centers, 0);
The thing that drives me nuts:
CvMat* labels = cvCreateMat(1, points->cols, CV_32S);
int good = labels->type == CV_32S; // FALSE here
It's obviously one issue (not sure if the only one) that causes the assertion to fail. How is this supposed to work? I can't use the C++ API since the whole application is in plain C.

the assertion tells you:
type must be CV_32S, which seems to be the case in your code. Your if-statement is probably false because the C API's CvMat::type field also carries flag bits (the magic number and the continuity flag), so the raw field never equals the bare CV_32S constant; compare CV_MAT_TYPE(labels->type) == CV_32S instead, as in the sketch below.
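A minimal sketch of that check (the flag-bits explanation is my reading of the C API internals, not something stated in the original post):

// cvCreateMat packs magic and continuity flags into CvMat::type,
// so a raw comparison against CV_32S fails even for a CV_32S matrix.
CvMat* labels = cvCreateMat(1, points->rows, CV_32S);
int raw  = (labels->type == CV_32S);              // 0: flag bits are set
int good = (CV_MAT_TYPE(labels->type) == CV_32S); // 1: element type matches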
You can place each point either in a row or in a column, so one of rows/cols must be 1 and the other dimension must equal data.rows, which indicates that data holds the points you want to cluster one per row, giving #points rows. So your error is CvMat* labels = cvCreateMat(1, points->cols, CV_32S);, which should be CvMat* labels = cvCreateMat(1, points->rows, CV_32S); instead. That makes the assertion go away, but your use of points also seems conceptually wrong.
You probably have to hold the points you want to cluster in a CvMat with n rows and 2 cols of type CV_32FC1, or 1 col of type CV_32FC2 (maybe both versions work, maybe only one, or maybe I'm wrong there entirely).
edit: I've written a short code snippet that works for me:
// here create the data array where your input points will be held:
CvMat* points = cvCreateMat( numberOfPoints, 2 /* 2D points */, CV_32F);
// this is a float pointer to the matrix data:
float* pointsDataPtr = points->data.fl;
// fill the mat:
for(unsigned int r=0; r<samples.size(); ++r)
{
    pointsDataPtr[2*r]   = samples.at(r).x; // x coordinate of the r-th point
    pointsDataPtr[2*r+1] = samples.at(r).y; // y coordinate of the r-th point
}
// this is the data array for the labels, which will be the output of the method:
CvMat* labels = cvCreateMat(1, points->rows, CV_32S);
// this is the quit criteria, which I neither checked nor modified, just used your version here:
CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 1.0);
// call the method for 2 clusters:
cvKMeans2(points, 2, labels, criteria);
// now labels holds numberOfPoints labels, each either 0 or 1 since we searched for 2 clusters
int* labelData = labels->data.i; // array of the labels
for(unsigned int r=0; r<samples.size(); ++r)
{
    int labelOfPointR = labelData[r]; // label value of point number r
    // here I use the C++ API to draw the points; do whatever else you want to do with
    // the label information (in the C API). I chose a different color per label.
    cv::Scalar outputColor;
    switch(labelOfPointR)
    {
        case 0:  outputColor = cv::Scalar(0,255,0); break;
        case 1:  outputColor = cv::Scalar(0,0,255); break;
        default: outputColor = cv::Scalar(255,0,255); break; // should never happen for 2 clusters
    }
    cv::circle(outputMat, samples.at(r), 2, outputColor);
}
giving me this result for some generated point data:
Maybe you need the centers too; the C API gives you the option to return them, but I didn't check how that works.
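For reference, a minimal sketch of how returning the centers might look, matching the cvKMeans2 signature used in the question; it is untested, so treat it as an assumption:

// Sketch: one row per cluster, one column per dimension (2 for 2D points).
CvMat* centers = cvCreateMat(2 /*clusters*/, 2 /*dims*/, CV_32FC1);
double compactness = 0.0;
cvKMeans2(points, 2, labels, criteria,
          3,             // attempts
          NULL,          // rng
          0,             // flags
          centers,       // cluster centers are written here
          &compactness); // sum of squared distances to the centers
// center k is at (centers->data.fl[2*k], centers->data.fl[2*k + 1])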

Related

OpenCV Tiff Wrong Color Values Readout

I have a 16-bit TIFF image with no color profile (camera profile embedded) and I am trying to read its RGB values in OpenCV. However, comparing the output values to the values shown when the image is opened in GIMP, for example, gives totally different values (GIMP being opened with the keep-the-image's-profile option; no profile conversion). I have also tried other imaging software like CaptureOne, and its result agrees with GIMP and differs from the OpenCV output.
I am not sure if reading and opening the image in OpenCV is somehow wrong, in spite of using the IMREAD_UNCHANGED flag.
I have also tried to read the image using the FreeImage library, but the result is still the same.
Here is a snippet of the code reading the pixels' values in OpenCV:
const float Conv16_8 = 255.f / 65535.f;
cv::Vec3d curVal;
// upperLeft/lowerRight are just some pre-defined corners for the ROI
for (int row = upperLeft.y; row <= lowerRight.y; row++) {
    unsigned char* dataUCPtr = img.data + row * img.step[0];
    unsigned short* dataUSPtr = (unsigned short*)dataUCPtr;
    dataUCPtr += 3 * upperLeft.x;
    dataUSPtr += 3 * upperLeft.x;
    for (int col = upperLeft.x; col <= lowerRight.x; col++) {
        if (/* some check if the pixel is valid */) {
            if (img.depth() == CV_8U) {
                for (int chan = 0; chan < 3; chan++) {
                    curVal[chan] = *dataUCPtr++;
                }
            }
            else if (img.depth() == CV_16U) {
                for (int chan = 0; chan < 3; chan++) {
                    curVal[chan] = (*dataUSPtr++) * Conv16_8;
                }
            }
            avgX += curVal;
        }
        else {
            dataUCPtr += 3;
            dataUSPtr += 3;
        }
    }
}
And here is the image (download the image) I am reading, with its RGB readouts in CaptureOne Studio AdobeRGB:
vs. the OpenCV RGB readouts (A1 = white --> F1 = black):
PS1: I have also tried changing the output color space in GIMP/CaptureOne to sRGB, but the difference stays almost the same, no closer to OpenCV.
PS2: I am reversing OpenCV imread's channel order before extracting the RGB values from the image (COLOR_RGB2BGR).
OP said:
I have a 16-bit tiff image with no color profile (camera profile embedded)
Well, no: your image definitely has a color profile, and it should not be ignored. The embedded profile is as important as the numeric values of each pixel; without a defined profile, the pixel values are somewhat meaningless.
From what I can tell, OpenCV does not linearize gamma by default... except when it does... Regardless, the gamma indicated in the profile is unique:
Now compare that to sRGB:
So the sRGB transformations can't be used.
If you are looking for performance, applying the curve via a LUT is usually more efficient than a full-on color management system.
In this case, use a LUT. The following LUT was taken from the color profile: 16-bit values, 256 steps:
// Phase One TRC from color profile
profileTRC = [0x0000,0x032A,0x0653,0x097A,0x0CA0,0x0FC2,0x12DF,0x15F8,0x190C,0x1C19,0x1F1E,0x221C,0x2510,0x27FB,0x2ADB,0x2DB0,0x3079,0x3334,0x35E2,0x3882,0x3B11,0x3D91,0x4000,0x425D,0x44A8,0x46E3,0x490C,0x4B26,0x4D2F,0x4F29,0x5113,0x52EF,0x54BC,0x567B,0x582D,0x59D1,0x5B68,0x5CF3,0x5E71,0x5FE3,0x614A,0x62A6,0x63F7,0x653E,0x667B,0x67AE,0x68D8,0x69F9,0x6B12,0x6C23,0x6D2C,0x6E2D,0x6F28,0x701C,0x710A,0x71F2,0x72D4,0x73B2,0x748B,0x755F,0x762F,0x76FC,0x77C6,0x788D,0x7951,0x7A13,0x7AD4,0x7B93,0x7C51,0x7D0F,0x7DCC,0x7E8A,0x7F48,0x8007,0x80C8,0x8189,0x824C,0x8310,0x83D5,0x849B,0x8562,0x862B,0x86F4,0x87BF,0x888A,0x8956,0x8A23,0x8AF2,0x8BC0,0x8C90,0x8D61,0x8E32,0x8F04,0x8FD7,0x90AA,0x917E,0x9252,0x9328,0x93FD,0x94D3,0x95AA,0x9681,0x9758,0x9830,0x9908,0x99E1,0x9ABA,0x9B93,0x9C6C,0x9D45,0x9E1F,0x9EF9,0x9FD3,0xA0AD,0xA187,0xA260,0xA33A,0xA414,0xA4EE,0xA5C8,0xA6A1,0xA77B,0xA854,0xA92D,0xAA05,0xAADD,0xABB5,0xAC8D,0xAD64,0xAE3B,0xAF11,0xAFE7,0xB0BC,0xB191,0xB265,0xB339,0xB40C,0xB4DE,0xB5B0,0xB680,0xB750,0xB820,0xB8EE,0xB9BC,0xBA88,0xBB54,0xBC1F,0xBCE9,0xBDB1,0xBE79,0xBF40,0xC005,0xC0CA,0xC18D,0xC24F,0xC310,0xC3D0,0xC48F,0xC54D,0xC609,0xC6C5,0xC780,0xC839,0xC8F2,0xC9A9,0xCA60,0xCB16,0xCBCA,0xCC7E,0xCD31,0xCDE2,0xCE93,0xCF43,0xCFF2,0xD0A0,0xD14D,0xD1FA,0xD2A5,0xD350,0xD3FA,0xD4A3,0xD54B,0xD5F2,0xD699,0xD73E,0xD7E3,0xD887,0xD92B,0xD9CE,0xDA6F,0xDB11,0xDBB1,0xDC51,0xDCF0,0xDD8F,0xDE2C,0xDEC9,0xDF66,0xE002,0xE09D,0xE138,0xE1D2,0xE26B,0xE304,0xE39C,0xE434,0xE4CB,0xE562,0xE5F8,0xE68D,0xE722,0xE7B7,0xE84B,0xE8DF,0xE972,0xEA04,0xEA97,0xEB29,0xEBBA,0xEC4B,0xECDC,0xED6C,0xEDFC,0xEE8B,0xEF1A,0xEFA9,0xF038,0xF0C6,0xF154,0xF1E1,0xF26F,0xF2FC,0xF388,0xF415,0xF4A1,0xF52D,0xF5B9,0xF645,0xF6D0,0xF75B,0xF7E6,0xF871,0xF8FC,0xF987,0xFA11,0xFA9B,0xFB26,0xFBB0,0xFC3A,0xFCC4,0xFD4E,0xFDD7,0xFE61,0xFEEB,0xFF75,0xFFFF]
If a matching array of the linearized values were needed, it would be
[0x0000,0x0101,0x0202,0x0303,0x0404....
But such an array is not needed for most uses, as the index into the profileTRC array directly relates to the linear value.
I.e. profileTRC[0x80] is 0xAD64,
and the corresponding linear value is 0x80 * 0x101.
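To make the direction of the table concrete, here is a minimal sketch of decoding one 16-bit sample with it. The inverse lookup with interpolation is my construction, not code from the original answer; it only relies on the stated layout (index i encodes the linear value i * 0x101):

#include <stdint.h>

extern const uint16_t profileTRC[256]; // the table quoted above

// Decode: recover the linear value whose encoded form matches 'encoded'.
uint16_t trc_decode(uint16_t encoded)
{
    // binary search: largest index i with profileTRC[i] <= encoded
    int lo = 0, hi = 255;
    while (lo < hi) {
        int mid = (lo + hi + 1) / 2;
        if (profileTRC[mid] <= encoded) lo = mid; else hi = mid - 1;
    }
    if (lo == 255) return 0xFFFF;
    // interpolate between entries lo and lo+1 (each index step = 0x101 linear)
    uint32_t span = profileTRC[lo + 1] - profileTRC[lo];
    uint32_t frac = span ? ((uint32_t)(encoded - profileTRC[lo]) * 0x101) / span : 0;
    uint32_t linear = (uint32_t)lo * 0x101 + frac;
    return linear > 0xFFFF ? 0xFFFF : (uint16_t)linear;
}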
It turned out that it is all about having, loading, and applying the proper ICC profile to the cv::Mat data. To do that, one must use a color management engine alongside OpenCV, such as LittleCMS.
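A minimal sketch of that approach with LittleCMS (lcms2); the filename camera.icc and the assumption that the embedded profile has already been extracted to disk are mine, not from the original post:

#include <lcms2.h>

// Convert 16-bit BGR pixels (OpenCV's channel order) to sRGB.
// src/dst point at pixel_count contiguous BGR triplets, e.g. cv::Mat::data.
void convert_to_srgb16(const void *src, void *dst, cmsUInt32Number pixel_count)
{
    cmsHPROFILE in  = cmsOpenProfileFromFile("camera.icc", "r");
    cmsHPROFILE out = cmsCreate_sRGBProfile();
    cmsHTRANSFORM xf = cmsCreateTransform(in,  TYPE_BGR_16,
                                          out, TYPE_BGR_16,
                                          INTENT_RELATIVE_COLORIMETRIC, 0);
    cmsDoTransform(xf, src, dst, pixel_count);
    cmsDeleteTransform(xf);
    cmsCloseProfile(in);
    cmsCloseProfile(out);
}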

nanopb (Protocol Buffers library) repeated sub-messages encode

We are using the nanopb library as our Protocol Buffers library. We defined the following messages:
simple.proto:
syntax = "proto2";
message repField {
    required float x = 1;
    required float y = 2;
    required float z = 3;
}

message SimpleMessage {
    required float lucky_number = 1;
    repeated repField vector = 2;
}
with simple.options
SimpleMessage.vector max_count:300
So we know the repeated field has a fixed maximum count of 300 and define it as such.
Parts of the generated code look like this:
simple.pb.c:
const pb_field_t repField_fields[4] = {
    PB_FIELD( 1, FLOAT  , REQUIRED, STATIC, FIRST, repField, x, x, 0),
    PB_FIELD( 2, FLOAT  , REQUIRED, STATIC, OTHER, repField, y, x, 0),
    PB_FIELD( 3, FLOAT  , REQUIRED, STATIC, OTHER, repField, z, y, 0),
    PB_LAST_FIELD
};

const pb_field_t SimpleMessage_fields[3] = {
    PB_FIELD( 1, FLOAT  , REQUIRED, STATIC, FIRST, SimpleMessage, lucky_number, lucky_number, 0),
    PB_FIELD( 2, MESSAGE, REPEATED, STATIC, OTHER, SimpleMessage, vector, lucky_number, &repField_fields),
    PB_LAST_FIELD
};
and part of simple.pb.h:
/* Struct definitions */
typedef struct _repField {
    float x;
    float y;
    float z;
/* ##protoc_insertion_point(struct:repField) */
} repField;

typedef struct _SimpleMessage {
    float lucky_number;
    pb_size_t vector_count;
    repField vector[300];
/* ##protoc_insertion_point(struct:SimpleMessage) */
} SimpleMessage;
We try to encode the message by doing:
// Init message
SimpleMessage message = SimpleMessage_init_zero;
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
// Fill in message
[...]
// Encode message
status = pb_encode(&stream, SimpleMessage_fields, &message);
// stream.bytes_written is wrong!
But stream.bytes_written is wrong, which means the message is not encoded correctly, even though status = 1.
In the documentation for pb_encode() it says:
[...] However, submessages must be serialized twice: first to calculate their size and then to actually write them to output. This causes some constraints for callback fields, which must return the same data on every call.
But we are not sure how to interpret this sentence: what steps exactly should we follow to achieve this?
So our question is:
What is the correct way to encode messages that contain fixed-size (repeated) submessages using the nanopb library?
Thank you!
You're not using callback fields here, so that quote doesn't matter for you. But if you were, it would just mean that in some situations your callback would be called multiple times.
Are you the same person as on the forum? Your Stack Overflow question does not show it, but the person on the forum has a similar problem that appears to be due to not setting vector_count; then it remains a zero-length array. So try adding:
message.vector_count = 300;
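For completeness, a minimal sketch of the elided "Fill in message" step with the count set; the values themselves are placeholders of my own:

SimpleMessage message = SimpleMessage_init_zero;
message.lucky_number = 13.0f;       // placeholder value
message.vector_count = 300;         // without this, zero entries are encoded
for (pb_size_t i = 0; i < message.vector_count; i++) {
    message.vector[i].x = (float)i; // placeholder coordinates
    message.vector[i].y = 0.0f;
    message.vector[i].z = 0.0f;
}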
In the future, please wait a few days before posting the same question in multiple places. It's a waste of volunteer time to answer the same question multiple times.

C Physics engine

I am working on a low-level fabric simulator. I have done some work, and at this point there are some points where I would appreciate help.
The input of the program is a DRF file. This is a format used in sewing machines to indicate where the needles should move.
The 2D representation is accurate: I parse the DRF info into a polyline, apply tensions, and extrude this in an OpenGL render.
Now I am trying to achieve the 3D Z-axis physics. I have tried two methods:
1) Assuming information
The first method is based on constraints of the manufacturing process: we only take care of the knots where a set of yarns interact and compute the Z separation at these crucial points. The result is mediocre: good in a lot of cases, but with a lot of patching to avoid cases where this is not a good assumption (for example, drawing Béziers, or collisions in zones between these crucial points). We gave up on this alternative when we saw there were a lot of special cases we would have to hardcode, probably creating additional glitches.
2) Custom 2D engine
The second attempt approximates the yarns with 2D box colliders, checks collisions with a grid, and computes the new Z values from those. This is by far more expensive, but leads to better results. There are still some problems, though:
The accuracy of the box colliders over the yarn is not absolute (there are ways to solve this, but it would be great to read some alternatives).
There is no iterative process: first we compute collisions pairwise and add to each collider a list of the colliders it collides with; then we sort this list and separate the Z axis as a function of yarn radius, centered on 0; the last step is to smooth the discrete Z results into Béziers or with smoothing filters. This leads to further glitches.
If I extend this recomputation to all the collisions of the current collisions, I get weird results because the Z changes propagate badly (maybe I am not doing this part well).
Some colliders are recomputed wrongly (the yarns computed first have more chances of being altered by the last ones, leading to glitches).
This is the result for the second approach (without recomputing Zs in the smoothing step):
And some of the glitches (mainly in the first yarns computed):
This is the collision engine:
Details of badly approximated curves:
At this point I have some questions:
Can I fix these glitches (or at least a great part of them)? Both the glitches from bad approximation and the glitches from Z recomputation.
Could a 3D engine like ODE do the job in a reasonable time?
If you need some specific code, don't hesitate to ask for it.
EDIT: OK, let's narrow the thing down.
Yesterday I tried some open-source engines without achieving good results: 500 collisions with joints crash the simulation, so I discarded them.
My problem:
A: How I generate the yarns:
I have a set of points and I trace Béziers between them.
CUBIC_BEZIER(p1x,p1y,p1z,p2x,p2y,p2z,p3x,p3y,p3z,p4x,p4y,p4z, npunts);
For each pair of points I add a collider:
p1.x = *(puntlinaux + k*NUMCOMP);
p1.y = *(puntlinaux + k*NUMCOMP + 1);
p2.x = *(puntlinaux + k*NUMCOMP + 4);
p2.y = *(puntlinaux + k*NUMCOMP + 5);
*bc = getCollider(&p1,&p2,x,y,z, radi_mm_fil , pbar->numbar);
where
BoxCollider2D getCollider(punt_st* p1, punt_st* p2, float *x, float *y, float *z, float radiFil, int numbar){
    BoxCollider2D bc;
    bc.numBar = numbar;
    int i;
    for(i = 0; i < MAX_COLLIDERS; i++) {
        bc.collisions[i] = NULL;
    }
    bc.isColliding = 0;
    bc.nCollisions = 0;
    bc.nextCollider = NULL;
    bc.lastCollider = NULL;
    bc.isProcessed = 0;
    punt_st center;
    float distance = distancePunts_st(p1, p2);
    bc.angle = atan2((p2->y - p1->y), (p2->x - p1->x));
    //bc.angle = 0;
    //DEBUG("getCollider\n");
    //DEBUG("angle: %f, radiFil: %f\n", bc.angle*360/(2*3.141592), radiFil);
    //DEBUG("Point: pre [%f,%f] post:[%f,%f]\n", p1->x, p1->y, p2->x, p2->y);
    bc.r.min = *p1;
    bc.r.max = *p2;
    center = getCenterRect(bc.r);
    bc.r.max.x = (center.x - distance/2);
    bc.r.max.y = (center.y + radiFil) - 0.001f;
    bc.r.min.x = (center.x + distance/2);
    bc.r.min.y = (center.y - radiFil) + 0.001f;
    bc.xIni = x;
    bc.yIni = y;
    bc.zIni = z;
    return bc;
}
Then I add the collider to a grid to reduce the complexity of the comparisons, and check the collisions with the Separating Axis Theorem:
DEBUG("calc_sapo: checking collisions\n");
checkCollisions();
After that, I resolve the collisions, producing a discrete Z approximation.
DEBUG("calc_sapo: solving collisions\n");
resolveCollisions();
And then I apply a smoothing function. This is the point where I am lost.
DEBUG("smoothing yarns\n");
smoothYarns();
To keep it simple, let's assume smoothYarns() does a simple mean of the previous and next Z values:
for each collider of each yarn:
    float aux = (*bc->zIni + *(bc-1)->zIni + *nextBC->zIni) / 3;
    float diff = aux - *bc->zIni;
    *bc->zIni = aux;
Then we have to update all the colliders in contact with this recomputed collider:
void updateCollider(BoxCollider2D *bc, float diff){
    int i;
    for(i = 0; i < bc->nCollisions; i++){
        *bc->collisions[i]->zIni += diff;
    }
}
This last step is messing up the whole simulation because the Zs keep accumulating...
I want to know why this does not tend to converge as I expected, and possible solutions for this problem.
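A sketch of one way the accumulation could be avoided (my suggestion, not something from the original post): compute each smoothing pass from the previous iteration's values into a separate buffer (a Jacobi-style update with damping) and only then swap buffers, so colliders updated early in a pass cannot feed into later ones within the same pass.

// Sketch only. zOld/zNew are parallel per-collider Z arrays; prev/next hold
// the indices of each collider's neighbours along its yarn (assumed precomputed).
void smoothPassJacobi(const float *zOld, float *zNew,
                      const int *prev, const int *next,
                      int n, float damping)
{
    int i;
    for (i = 0; i < n; i++) {
        float target = (zOld[i] + zOld[prev[i]] + zOld[next[i]]) / 3.0f;
        // damping < 1 keeps each pass small so the iteration can converge
        zNew[i] = zOld[i] + damping * (target - zOld[i]);
    }
    // the caller swaps zOld and zNew and repeats until the largest
    // per-pass change falls below a tolerance
}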
EDIT2:
This is the algorithm that detects collisions:
For each collider in the grid position:
    Compare with the others in the grid
    if colliding: add the colliding collider to the current collider
For each yarn:
    Go through all the colliders (linked list)
For each collider of a yarn:
    Bubble-sort the colliding yarns by their Z order
    Compute Z based on the yarn radii (centered on 0)
float computeZ(BoxCollider2D *array[], int nYarns, BoxCollider2D *bc){
    int i;
    float radiConflicte = 0;
    float zMainYarn = 0;
    finger_st *pfin = NULL; // previous yarn marker; NULL before the first collider
    for(i = 0; i < nYarns; i++){
        if(pfin != array[i]->pfin){
            float radiFil = getRadiFromBC2D(*array[i]); // see getCollider
            radiConflicte += radiFil;
            pfin = array[i]->pfin;
        }
    }
    pfin = NULL;
    for(i = 0; i < nYarns; i++){
        float radiFil = getRadiFromBC2D(*array[i]); // see getCollider
        if(pfin != array[i]->pfin){
            radiConflicte -= radiFil;
            *(array[i]->zIni) = radiConflicte;
            if(array[i] == bc) zMainYarn = *(array[i]->zIni);
            radiConflicte -= radiFil;
            pfin = array[i]->pfin;
        }
        else *(array[i]->zIni) = *(array[i-1]->zIni);
    }
    return zMainYarn;
}
This leads to a situation where the last yarns processed can alter the first ones, ultimately causing glitches.
How can I avoid that?
Thanks a lot!

OpenCV Object Detection Memory Issue

The code I wrote below does object detection using size-invariant template matching. This technique is detailed on the following site: http://www.pyimagesearch.com/2015/01/26/multi-scale-template-matching-using-python-opencv/
The function works fine. However, when too many iterations of the while loop have run, an "OpenCV Error: Insufficient Memory" occurs.
I can't figure out why I'm running into this error, as I release the matrix data that I create with cvCreateMat in every iteration. The memory error occurs after the loop has run several times, inside cvMatchTemplate. Am I missing another source of error? I am writing this code in C on the LCDK.
Function:
double minVal = 0.0;
double maxVal = 0.0;
CvPoint minLoc;
CvPoint maxLoc;
double ratio = 1.0;
CvMat* mask2 = NULL;
//for containing the maximum values
CvMat* resized_source;
CvMat* result; //result stored for every shape
while (1){
    // All templates are sized around the same
    if(width_curr <= template->cols || height_curr <= template->rows)
        break;
    resized_source = cvCreateMat(height_curr, width_curr, CV_8UC1);
    cvResize(source_close_edge_dist_8, resized_source, CV_INTER_LINEAR);
    result = cvCreateMat((resized_source->rows)-(template->rows)+1, (resized_source->cols)-(template->cols)+1, CV_32FC1);
    cvMatchTemplate(resized_source, template, result, CV_TM_CCOEFF);
    //Detecting several objects
    cvMinMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, mask2);
    *(max_all+*size) = maxVal;
    (max_all_point+*size)->x = maxLoc.x;
    (max_all_point+*size)->y = maxLoc.y;
    *(max_all_ratio+*size) = sqrt(ratio);
    *size = *size + 1;
    // move on to next resizing
    ratio -= 0.04;
    width_curr = sqrt(ratio)*width;
    height_curr = sqrt(ratio)*height;
    minVal = 0.0;
    maxVal = 0.0;
    cvReleaseData(resized_source);
    cvReleaseMat(&resized_source);
    cvReleaseData(result);
    cvReleaseMat(&result);
}
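One pattern worth trying on a memory-constrained target like the LCDK (a sketch under the assumption that heap fragmentation from the repeated create/release cycle is the culprit, not a confirmed fix): allocate the largest buffers once before the loop and wrap them in mat headers of the current size each iteration, so cvInitMatHeader reuses existing memory instead of allocating.

// Sketch: preallocate at the initial (largest) size, reuse across iterations.
CvMat hdr_src, hdr_res;
void* src_buf = cvAlloc(height * width);                 // CV_8UC1 bytes
void* res_buf = cvAlloc(height * width * sizeof(float)); // CV_32FC1 floats

while (width_curr > template->cols && height_curr > template->rows) {
    cvInitMatHeader(&hdr_src, height_curr, width_curr, CV_8UC1,
                    src_buf, CV_AUTOSTEP);
    cvResize(source_close_edge_dist_8, &hdr_src, CV_INTER_LINEAR);
    cvInitMatHeader(&hdr_res, height_curr - template->rows + 1,
                    width_curr - template->cols + 1, CV_32FC1,
                    res_buf, CV_AUTOSTEP);
    cvMatchTemplate(&hdr_src, template, &hdr_res, CV_TM_CCOEFF);
    // ... same cvMinMaxLoc bookkeeping as above ...
    ratio -= 0.04;
    width_curr = sqrt(ratio) * width;
    height_curr = sqrt(ratio) * height;
}
cvFree(&src_buf);
cvFree(&res_buf);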

cairo draw_text and dangling pointer on itemized lines?

I am trying to use cairo to draw text on screen, but the macOS Instruments tool tells me that I have dangling pointers caused by pango_layout_get_pixel_size or pango_layout_get_baseline (actually by the first of those functions called, since it needs the PangoLines and stores them in the layout variable).
In all the code shown in the documentation and examples, you just have to g_object_unref(layout) at the end... but still, those pointers allocated by itemized_state_init (inside pango_layout_get_pixel_size, for example) are not freed.
Am I doing something wrong?
double posX = somevalue;
double posY = somevalue;
cairo_save (CurCairoState);
PangoLayout *layout = pango_cairo_create_layout (CurCairoState);
pango_layout_set_text (layout, text, -1);
pango_layout_set_font_description (layout, curCairoContext->font);
int width, height;
pango_layout_get_pixel_size (layout, &width, &height);
int b = (int)(pango_layout_get_baseline (layout) / PANGO_SCALE);
/* applying alignment attribute */
switch (curCairoContext->textAnchor) {
    case GMiddle:
        posX = posX - (width / 2.0);
        break;
    case GEnd:
        posX = posX - width;
        break;
}
cairo_move_to (CurCairoState, posX, posY - b);
pango_cairo_show_layout (CurCairoState, layout);
g_object_unref (layout);
cairo_restore (CurCairoState);
And here is the result of the allocation in Instruments:
Thanks for any reply!
EDIT:
Here is the new screenshot after running with G_SLICE=always-malloc G_DEBUG=gc-friendly.
Those dangling pointers now show up as leaks in Instruments:
EDIT (no solution found):
I downloaded the sources of pango-1.40.3, compiled them, and installed them with markers of my own, and I don't understand why I get those dangling pointers on macOS, because I do go through pango_layout_finalize and pango_layout_clear_lines, and layout->ref_count == 0 when I call g_object_unref(layout).
