I want to draw a rectangle over the feature points obtained from the optical flow step.
I'm using cvSeqPush to build a sequence of the points and cvBoundingRect to get the rectangle:
CvSeq* trackCarContour = cvCreateSeq(CV_32FC2, sizeof(CvSeq), sizeof(CvPoint2D32f), contour_storage );
for(int i=0;i<numberOfFeaturesFound;i++)
{
cvSeqPush( trackCarContour, &cur_feature[i] );
}
CvRect trackCarRect = cvBoundingRect(trackCarContour, 0);
However, the rectangle drawn is not bound around those points; instead it shows up in the top-left corner of the image.
Is there any other way to do this? Or, what's wrong with my way of doing this?
My linear algebra is weak. WPF is a great system for rendering different transformations on an image. However, the standard ScaleTransform will only scale the image along the x-y axes. When the edges have first been rotated, applying the ScaleTransform results in a skewed transformation (as shown below), since the edges are no longer aligned with those axes.
So, if I have an image that has undergone several different transforms, with the result shown by the WPF rendering system, how do I calculate the correct matrix transform to take the (final, rotated) image and scale it along the axes of the rendered image?
Any help or suggestions will be most appreciated.
TIA
(For the complete code, please see my previous question.)
Edit #1: To see the above effect:
1. Drop the image onto the InkCanvas -- no skewing seen.
2. Rotate the image counterclockwise (to about 45 degrees) -- no skewing seen.
3. Make the image larger (to about twice its prescaled size) -- no skewing seen.
4. Rotate the image clockwise (back to about where it started) -- skewing is immediately seen during and after the rotation.
If step 3 is skipped, simple rotation -- no matter how many times it is done -- will not cause the skewing effect. Actually, this makes sense: the ScaleTransform scales distances along the x and y axes. If the image is at an angle, its edges are no longer at a constant x-y distance along the width and length of the rendered image, so the edges get scaled by different amounts and the angles are changed.
Here is the most relevant code:
private ImageResizing(Image image)
{
if (image == null)
throw new ArgumentNullException("image");
_image = image;
TransformGroup tg = new TransformGroup();
image.RenderTransformOrigin = new Point(0.5, 0.5); // All transforms will be based on the center of the rendered element.
tg.Children.Add(image.RenderTransform); // Keeps whatever transforms have already been applied.
image.RenderTransform = tg;
_adorner = new MyImageAdorner(image); // Create the adorner.
InstallAdorner(); // Get the Adorner Layer and add the Adorner.
}
Note: The image.RenderTransformOrigin = new Point(0.5, 0.5) is set to the center
of the rendered image. All transforms will be based on the center of the image as it is seen by the transform.
public MyImageAdorner(UIElement adornedElement)
: base(adornedElement)
{
visualChildren = new VisualCollection(this);
// Initialize the Movement and Rotation thumbs.
BuildAdornerRotate(ref moveHandle, Cursors.SizeAll);
BuildAdornerRotate(ref rotateHandle, Cursors.Hand);
// Add handlers for move and rotate.
moveHandle.DragDelta += new DragDeltaEventHandler(moveHandle_DragDelta);
moveHandle.DragCompleted += new DragCompletedEventHandler(moveHandle_DragCompleted);
rotateHandle.DragDelta += new DragDeltaEventHandler(rotateHandle_DragDelta);
rotateHandle.DragCompleted += new DragCompletedEventHandler(rotateHandle_DragCompleted);
// Initialize the Resizing (i.e., corner) thumbs with specialized cursors.
BuildAdornerCorner(ref topLeft, Cursors.SizeNWSE);
// Add handlers for resizing.
topLeft.DragDelta += new DragDeltaEventHandler(TopLeft_DragDelta);
topLeft.DragCompleted += TopLeft_DragCompleted;
// Put the outline border around the image. The outline will be moved by the DragDelta handlers.
BorderTheImage();
}
#region [Rotate]
/// <summary>
/// Rotate the Adorner Outline about its center point. The Outline rotation will be applied to the image
/// in the DragCompleted event.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
void rotateHandle_DragDelta(object sender, DragDeltaEventArgs e)
{
// Get the position of the mouse relative to the Thumb. (All coordinates are in render space.)
Point pos = Mouse.GetPosition(this);
// Render origin is set at center of the adorned element. (all coordinates are in rendering space).
double CenterX = AdornedElement.RenderSize.Width / 2;
double CenterY = AdornedElement.RenderSize.Height / 2;
double deltaX = pos.X - CenterX;
double deltaY = pos.Y - CenterY;
double angle;
if (deltaY.Equals(0))
{
if (!deltaX.Equals(0))
angle = 90;
else
return;
}
else
{
double tan = deltaX / deltaY;
angle = Math.Atan(tan); angle = angle * 180 / Math.PI;
}
// If the mouse crosses the vertical center,
// find the complementary angle.
if (deltaY > 0)
angle = 180 - Math.Abs(angle);
// Rotate left if the mouse moves left and right
// if the mouse moves right.
if (deltaX < 0)
angle = -Math.Abs(angle);
else
angle = Math.Abs(angle);
if (double.IsNaN(angle))
return;
// Apply the rotation to the outline. All Transforms are set to Render Center.
rotation.Angle = angle;
rotation.CenterX = CenterX;
rotation.CenterY = CenterY;
outline.RenderTransform = rotation;
}
/// Rotates the image to the same angle as the outline, around the render origin.
void rotateHandle_DragCompleted(object sender, DragCompletedEventArgs e)
{
// Get Rotation Angle from outline. All element rendering is set to rendering center.
RotateTransform _rt = outline.RenderTransform as RotateTransform;
// Add RotateTransform to the adorned element.
TransformGroup gT = AdornedElement.RenderTransform as TransformGroup;
RotateTransform rT = new RotateTransform(_rt.Angle);
gT.Children.Insert(0, rT);
AdornedElement.RenderTransform = gT;
outline.RenderTransform = Transform.Identity; // clear transform from outline.
}
#endregion //Rotate
#region [TopLeft Corner]
// Top Left Corner is being dragged. Anchor is Bottom Right.
void TopLeft_DragDelta(object sender, DragDeltaEventArgs e)
{
ScaleTransform sT = new ScaleTransform(1 - e.HorizontalChange / outline.ActualWidth, 1 - e.VerticalChange / outline.ActualHeight,
outline.ActualWidth, outline.ActualHeight);
outline.RenderTransform = sT; // This will immediately show the new outline without changing the Image.
}
/// The resizing outline for the TopLeft is based on the bottom-right corner. The resizing transform for the
/// element, however, is based on the render origin being in the center. Therefore, the scale transform
/// received from the outline must be recalculated to have the same effect--only from the rendering center.
///
/// TopLeft_DragCompleted resizes the element rendering.
private void TopLeft_DragCompleted(object sender, DragCompletedEventArgs e)
{
// Get new scaling from the Outline.
ScaleTransform _sT = outline.RenderTransform as ScaleTransform;
scale.ScaleX *= _sT.ScaleX; scale.ScaleY *= _sT.ScaleY;
Point Center = new Point(AdornedElement.RenderSize.Width/2, AdornedElement.RenderSize.Height/2);
TransformGroup gT = AdornedElement.RenderTransform as TransformGroup;
ScaleTransform sT = new ScaleTransform( _sT.ScaleX, _sT.ScaleY, Center.X, Center.Y);
gT.Children.Insert(0, sT);
AdornedElement.RenderTransform = gT;
outline.RenderTransform = Transform.Identity; // Clear outline transforms. (Same as null).
}
#endregion
Note: I am adding each new transform at the front of the Children list. This makes calculations on the image easier.
I could not find, with Google or in texts, all the elements needed to answer this question completely. So, for all other newbies like myself, I will post this (very long) answer. (Editors and gurus, please feel free to correct.)
A word about setup: I have an InkCanvas onto which an image is dropped and added as a child of the InkCanvas. At the time of the drop, an adorner is added for final positioning of the image; it contains a Thumb on each corner for resizing, a top-middle Thumb for rotating, and a middle Thumb for translation. Along with an "outline" drawn as a Path element, the Thumbs complete the adorner and create a kind of wireframe around the adorned element.
There are multiple key points:
WPF first uses a layout pass to position elements within their parent container, followed by a rendering pass to arrange the element. Transforms can be applied to either or both of the layout and rendering passes. However, it needs to be noted that the layout pass uses an x-y coordinate system with the origin at the top left of the parent, whereas the rendering system inherently references the top left of the child element. If the layout position of the dropped element is not specifically defined, it will by default be added at the "origin" of the parent container.
The RenderTransform is by default a MatrixTransform but can be replaced by a TransformGroup. Using either or both of these allows matrices (in the MatrixTransform) or transforms (in the TransformGroup) to be applied in any order. My preference was to use MatrixTransforms to better see the relationship between scaling, rotation, and translation. (A short sketch of the two equivalent forms follows these key points.)
The rendering of the adorner follows the element it adorns. That is, the element's rendering will also be applied to the adorner. This behavior can be overridden by use of
public override GeneralTransform GetDesiredTransform(GeneralTransform transform)
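As a minimal illustration of the MatrixTransform/TransformGroup point above (this is not code from my project; image is just a placeholder element, and the types come from System.Windows.Media), the same scale-rotate-translate chain can be written either way:
// Composing with a TransformGroup: children are applied in order, first to last.
TransformGroup tg = new TransformGroup();
tg.Children.Add(new ScaleTransform(2.0, 2.0));     // scale first
tg.Children.Add(new RotateTransform(45));          // then rotate
tg.Children.Add(new TranslateTransform(100, 50));  // translate last
image.RenderTransform = tg;
// Composing the same chain with a single MatrixTransform.
// WPF matrices use row vectors, so the left-most factor in the product is applied first.
Matrix sM = new Matrix(); sM.Scale(2.0, 2.0);
Matrix rM = new Matrix(); rM.Rotate(45);
Matrix tM = new Matrix(); tM.Translate(100, 50);
image.RenderTransform = new MatrixTransform { Matrix = sM * rM * tM };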
As noted in the initial question, I had avoided using SetTop() and SetLeft() because they messed up my other matrices. In hindsight, the reason my matrices failed was that SetTop() and SetLeft() act during the layout phase, so all my coordinates for rendering were off. (I had been using a TranslateTransform to position the image on drag-drop.) Switching to SetTop() and SetLeft() greatly simplified the calculations needed for the rendering phase, since all coordinates could then refer to the image without regard to its position on the canvas.
private void IC_Drop(object sender, DragEventArgs e)
{
InkCanvas ic = sender as InkCanvas;
// Setting InkCanvasEditingMode.None is necessary to capture DrawingLayer_MouseDown.
ic.EditingMode = InkCanvasEditingMode.None;
ImageInfo image_Info = e.Data.GetData(typeof(ImageInfo)) as ImageInfo;
if (image_Info != null)
{
// Display enlarged image on ImageLayer
// This is the expected format for the Uri:
// ImageLayer.Source = new BitmapImage(new Uri("/Images/Female - Front.png", UriKind.Relative));
// Source = new BitmapImage(image_Info.Uri);
Image image = new Image();
image.Width = image_Info.Width * 4;
// Stretch.Uniform keeps the Aspect Ratio but totally screws up resizing the image.
// Stretch.Fill allows for resizing the Image without keeping the Aspect Ratio.
image.Stretch = Stretch.Fill;
image.Source = new BitmapImage(image_Info.Uri);
// Position the drop. Note that SetLeft and SetTop are active during the Layout phase of the image drop and will
// be applied before the Image hits its Rendering stage.
Point position = e.GetPosition(ic);
InkCanvas.SetLeft(image, position.X);
InkCanvas.SetTop(image, position.Y);
ic.Children.Add(image);
ImageResizing imgResize = ImageResizing.Create(image);
}
}
Since I want to be able to resize the image from any direction, the image is set with Stretch.Fill. When Stretch.Uniform was used, the image appeared to first be resized then jump back to its initial size.
Since I am using MatrixTransform, the order of the Matrices is important. So when applying the Matrices, for my use
// Make new render transform. The Matrix order of multiplication is extremely important.
// Scaling should be done first, followed by (skewing), rotation and translation -- in
// that order.
MatrixTransform gT = new MatrixTransform
{
Matrix = sM * rM * tM
};
ele.RenderTransform = gT;
Scaling (sM) is performed first, followed by rotation (rM); translation (tM) is applied last. (WPF matrices use row vectors, so in the product sM * rM * tM the left-most matrix is applied first.)
In reviewing the matrices, it is apparent that the rotation matrix also involves the off-diagonal elements used for skewing. (Which makes sense, since the RotateTransform is intended to keep the angles at the edges constant.) Thus, the rotation matrix depends on the size of the image.
In my case, the reason scaling after rotation was causing skewing is that the ScaleTransform multiplies each point's distance from the x and y axes. If an edge of the image is no longer at a constant distance from those axes (because the image has been rotated), its points get scaled by different amounts, which distorts (i.e., skews) the image.
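A quick numeric check makes this concrete (just an illustration; Matrix here is System.Windows.Media.Matrix): rotating by 45 degrees and then scaling along x alone produces a matrix whose rows are no longer perpendicular, which is exactly a shear.
Matrix m = new Matrix();
m.Rotate(45);        // rotate first...
m.Scale(2.0, 1.0);   // ...then scale only along the x axis
// m is now approximately (1.414, 0.707, -1.414, 0.707, 0, 0).
// Its rows (1.414, 0.707) and (-1.414, 0.707) are not perpendicular,
// so applying m to an axis-aligned rectangle shears it instead of just resizing it.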
Putting this together, results in the following method to resize the image:
Action<Matrix, Vector> DragCompleted = (growthMatrix, v) =>
{
var ele = AdornedElement;
// Get the change vector and transform (i.e., rotate) it onto the x-y axes.
// The horizontal and vertical changes give the distance between the current cursor position
// and the Thumb.
Matrix m = new Matrix();
m.Rotate(-AngleDeg);
Vector v1 = v * m;
// Calculate Growth Vector.
var gv = v1 * growthMatrix;
// Apply new scaling along the x-y axes to obtain the rendered size.
// Use the current Image size as the reference to calculate the new scaling factors.
var scaleX = sM.M11; var scaleY = sM.M22;
var W = ele.RenderSize.Width * scaleX; var H = ele.RenderSize.Height * scaleY;
var sx = 1 + gv.X/ W; var sy = 1 + gv.Y / H;
// Change the scale transform by applying the new scaling factors to the existing scale transform.
// Do not add offsets to the scale transform matrix, as they will be included in future scalings.
// With RenderTransformOrigin set to the image center (0.5, 0.5), scaling occurs from the center out.
// Move the center of the resized image to its correct position so that the image's thumb stays
// underneath the cursor.
sM.Scale(sx, sy);
tM.Translate(v.X / 2, v.Y / 2);
// New render transform. The order of the transforms is extremely important.
MatrixTransform gT = new MatrixTransform
{
Matrix = sM * rM * tM
};
ele.RenderTransform = gT;
outline.RenderTransform = Transform.Identity; // clear this transform from the outline.
};
Just to be clear, my "Growth Matrix" is defined in such a manner as to result in "Positive" growth as the cursor is moved away from the center of the image. For Example, the TopLeft corner will "grow" the image when moved to the left and up. Hence
growth matrix = new Matrix(-1, 0, 0, -1, 0, 0) for top-left corner.
The last problem is that of correctly calculating the rotation center (i.e., I want to spin, not orbit). This becomes greatly simplified by using
// All transforms will be based on the center of the rendered element.
AdornedElement.RenderTransformOrigin = new Point(0.5, 0.5);
Lastly, since I am scaling from a corner, the center of the image needs to be translated to keep the corner underneath the cursor.
Sorry for the length of this answer, but there is much to cover (and learn :) ). Hope this helps somebody.
I have to draw a circle on several images. For each image the radius of curvature is different, but the center is constant.
The problem is: no matter how big the circle is, it shouldn't cross into the upper half of the image. It's OK if it becomes invisible or if only a part of it is visible in the lower half.
I am using OpenCV 2.4.4 with the C API.
The values for the circle are found by:
for(angle1 = 0; angle1 < 360; angle1++)
{
    // sin() and cos() expect radians, so convert the degree counter first.
    double theta = angle1 * CV_PI / 180.0;
    x[angle1] = r * sin(theta) + axis_x;
    y[angle1] = r * cos(theta) + axis_y;
}
FYI:
cvCircle( img,center_circle, r,cvScalar( 0, 0, 255,0 ),2,8,0);
Draws the circle over the entire image, which I don't want to happen.
How can I do it? To repeat: no part of the circle should appear in the upper half of the image.
And the code should use OpenCV's C API.
In MATLAB this is pretty easy: I only have to select the pixels and map them onto the image.
I am new to OpenCV, and operations like img->data.i/f/s/db[50] = 50; are giving errors.
A pretty naive approach is to make a copy of the upper half of the image, draw the complete circle, and then copy the upper half back into the original image. This may not be the best approach, but it works. Here is how it can be achieved:
void drawCircleLowerHalf(IplImage* image, CvPoint center, int radius, CvScalar color, int thickness, int line_type, int shift)
{
    // Save a copy of the upper half of the image.
    CvRect roi = cvRect(0, 0, image->width, image->height/2);
    IplImage* upperHalf = cvCreateImage(cvSize(image->width, image->height/2), image->depth, image->nChannels);
    cvSetImageROI(image, roi);
    cvCopy(image, upperHalf);
    cvResetImageROI(image);

    // Draw the full circle on the original image.
    cvCircle(image, center, radius, color, thickness, line_type, shift);

    // Restore the saved upper half, erasing whatever part of the circle landed there.
    cvSetImageROI(image, roi);
    cvCopy(upperHalf, image);
    cvResetImageROI(image);
    cvReleaseImage(&upperHalf);
}
Just call this function with the same arguments as you would pass to cvCircle.
I am drawing a polygon in a square window. When I resize the window, for instance by fullscreening, the aspect ratio is disturbed. From a reference I found one way of preserving the aspect ratio. Here is the code:
void reshape (int width, int height) {
float cx, halfWidth = width*0.5f;
float aspect = (float)width/(float)height;
glViewport (0, 0, (GLsizei) width, (GLsizei) height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(cx-halfWidth*aspect, cx+halfWidth*aspect, bottom, top, zNear, zFar);
glMatrixMode (GL_MODELVIEW);
}
Here, cx is the eye-space center of the zNear plane in X. Could someone please explain how I can calculate this? I believe it should be the average of the first two arguments to glFrustum(). Am I right? Any help will be greatly appreciated.
It looks like what you want to do is maintain the field of view, or angle of view, when the aspect ratio changes. See the section titled 9.085 How can I make a call to glFrustum() that matches my call to gluPerspective()? in the OpenGL FAQ for details on how to do that. Here's the short version:
fov*0.5 = arctan ((top-bottom)*0.5 / near)
top = tan(fov*0.5) * near
bottom = -top
left = aspect * bottom
right = aspect * top
See the link for details.
The first two arguments are the X coordinates of the left and right clipping planes in eye space. Unless you are doing off-axis tricks (for example, to display uncentered projections across multiple monitors), left and right should have the same magnitude and opposite signs, which would make your cx variable zero.
If you are having trouble understanding glFrustum, you can always use gluPerspective instead, which has a somewhat simpler interface.
I am trying to rotate elements on a canvas and then save their rotated (not original) positions to a file. I implemented a custom UIElement control to display a custom graphic. The graphic is rotated correctly on screen (no problem there); however, when I obtain the position of the element using GetValue(Canvas.LeftProperty) and GetValue(Canvas.TopProperty), the X, Y coordinates and the angle are those of the original image before rotation.
I am learning WPF to finish a project for school, so my knowledge of the technology is not as vast as I would like, but if anyone can help me I would greatly appreciate it. Thank you.
This is the relevant code I have:
CustomObject m;
List<CustomObject> co = new List<CustomObject>();
foreach (var child in canvas1.Children){
m = child as CustomObject;
if (m != null && m.IsEnabled && m.IsVisible){
SaveStructure m1 = new SaveStructure();
m1.Angle = Convert.ToSingle(ToRadians(m.Angle));
m1.X = Convert.ToInt32(m.GetValue(Canvas.LeftProperty));
m1.Y = Convert.ToInt32(m.GetValue(Canvas.TopProperty));
co.Add(m1);
}
}
Note: All I want to know is how to get the position of the rotated element on the canvas, because I keep obtaining the original (unrotated) position.
The position you get is still the same because the object was not moved, only rotated. If you want the bounds of the rotated object, that is something different from getting its position. You could do that by taking the corner point coordinates of the element (Canvas.GetLeft(m), Canvas.GetTop(m), Canvas.GetLeft(m) + m.Width, Canvas.GetTop(m) + m.Height), rotating them using the RotateTransform's Transform(Point p) method, and then extracting the bounds from those rotated points.
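Roughly like this, for example (a sketch only; it assumes the element rotates about its own center and exposes its angle in degrees as in your m.Angle property, with types from System.Windows, System.Windows.Controls, and System.Windows.Media):
double left = Canvas.GetLeft(m), top = Canvas.GetTop(m);
// Rotation about the element's center, using the same angle the element is rendered with.
RotateTransform rotation = new RotateTransform(m.Angle, left + m.Width / 2, top + m.Height / 2);
// Axis-aligned bounding box of the rotated element.
Rect rotatedBounds = rotation.TransformBounds(new Rect(left, top, m.Width, m.Height));
// Or transform each corner individually if you need the corner points themselves.
Point rotatedTopLeft = rotation.Transform(new Point(left, top));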
I have a cube rendering inside a Viewport3D, and I need a way to find out whether ALL of the cube is visible to the user.
Edit: Just to be clear, I'm not talking about clipping because of the near/far plane distance here. I mean the cube is too tall or too wide to fit in the camera's field of view.
Any help would be massively appreciated!
Thanks in advance.
I can't offer a solution but I can, perhaps, point you in the right direction.
What you need to get hold of is the extent of the 2D projection of the cube on the view plane. You can then do a simple check on the min and max X & Y values to see whether the whole of the cube is visible.
Adding a tolerance factor to the extent will take care of any rounding errors.
EDIT: I have just done a Google search for "2D projection WPF" and this link came up. It looks like it addresses what you want.
FURTHER EDIT: I've copied the relevant section of code from the above link here.
public static Rect Get2DBoundingBox(ModelVisual3D mv3d, Viewport3DVisual vpv)
{
// vpv is the Viewport3DVisual that hosts the model; MathUtils comes from the 3DTools library.
bool bOK;
Matrix3D m = MathUtils.TryWorldToViewportTransform(vpv, out bOK);
bool bFirst = true;
Rect r = new Rect();
if (mv3d.Content is GeometryModel3D)
{
GeometryModel3D gm3d = (GeometryModel3D) mv3d.Content;
if (gm3d.Geometry is MeshGeometry3D)
{
MeshGeometry3D mg3d = (MeshGeometry3D)gm3d.Geometry;
foreach (Point3D p3d in mg3d.Positions)
{
Point3D pb = m.Transform(p3d);
Point p2d = new Point(pb.X, pb.Y);
if (bFirst)
{
r = new Rect(p2d, new Size(1, 1));
bFirst = false;
}
else
{
r.Union(p2d);
}
}
}
}
return r;
}
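To answer the original question, you would then compare that rectangle with the viewport bounds, roughly like this (a sketch only; viewport and model stand in for your Viewport3D and the cube's ModelVisual3D, and asking for the visual parent of the viewport's first child is one common way to reach its Viewport3DVisual):
// Reach the Viewport3DVisual that hosts the 3D children of the Viewport3D.
Viewport3DVisual vpv = VisualTreeHelper.GetParent(viewport.Children[0]) as Viewport3DVisual;
Rect projected = Get2DBoundingBox(model, vpv);
// The cube is fully visible if its projected bounds fit inside the viewport,
// allowing a small tolerance for rounding errors.
Rect viewportBounds = new Rect(0, 0, viewport.ActualWidth, viewport.ActualHeight);
viewportBounds.Inflate(1.0, 1.0);
bool fullyVisible = viewportBounds.Contains(projected);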
I remember a tutorial on frustum culling on flipcode.
Flipcode - Frustum Culling
I hope it helps.
I can think of doing something similar to this:
Check the point of the cube nearest to the camera and see whether it is being clipped by the near clipping plane.
The nearest point to the camera will be one of the cube's corner points. So you have to check each of the 8 corner points of the cube and see whether it is being clipped. If none of them are, then your cube is fully visible.
Oh, and obviously you have to check against the far clipping plane too.
The code is simple:
for each corner point of the cube do
    if !(point is inside both the near and far clipping planes)
        return false
    end if
end for
return true
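In WPF terms, the near/far part of that test might look roughly like this (a sketch only; it assumes a PerspectiveCamera and cube corners already expressed in world coordinates, with types from System.Windows.Media.Media3D):
// Returns true if a world-space corner lies between the camera's near and far planes.
bool IsBetweenNearAndFar(Point3D corner, PerspectiveCamera camera)
{
    Vector3D look = camera.LookDirection;
    look.Normalize();
    // Depth of the corner measured along the viewing direction.
    double depth = Vector3D.DotProduct(corner - camera.Position, look);
    return depth >= camera.NearPlaneDistance && depth <= camera.FarPlaneDistance;
}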