Detect shapes using OpenCV in C

I have been trying to detect shapes in an image and also arrive at a count of how many such shapes are present in the image, for example a plus sign. Are there any built-in functions to detect such shapes? If so, please let me know.
Thank you.

You need to find all contours in an image and then filter them.
We know that a plus sign has 12 corners, so you need to keep only the contours that have 12 corners. Of course, this can sometimes give you unwanted objects, so you can filter again, keeping only contours whose corner angles (each formed by 3 consecutive corners, i.e. 2 edges) have a cosine of at most 0.3, for example.
Take a look at squares.cpp in the samples directory of OpenCV. It finds all contours with 4 corners and angle cosines of at most 0.3, so pretty much all squares.
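A minimal C sketch of that filtering idea, using the old C API the question asks about (the threshold value, the 0.02 perimeter factor and the plain 12-corner check are assumptions to tune; the extra angle-cosine test from squares.cpp is only hinted at in a comment):

    #include <opencv/cv.h>

    /* Count contours that approximate to 12 corners (a plus sign).
       Sketch only: the threshold and the 0.02 factor are assumptions. */
    int count_plus_signs(IplImage *gray)
    {
        CvMemStorage *storage = cvCreateMemStorage(0);
        CvSeq *contours = NULL;
        CvSeq *c;
        int count = 0;

        /* binarize, then find all contours */
        cvThreshold(gray, gray, 128, 255, CV_THRESH_BINARY);
        cvFindContours(gray, storage, &contours, sizeof(CvContour),
                       CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

        for (c = contours; c != NULL; c = c->h_next) {
            /* approximate the contour with a polygon */
            CvSeq *poly = cvApproxPoly(c, sizeof(CvContour), storage,
                                       CV_POLY_APPROX_DP,
                                       cvContourPerimeter(c) * 0.02, 0);
            if (poly->total == 12)   /* plus sign: 12 corners */
                count++;             /* optionally also test the angle
                                        cosines here, as squares.cpp does */
        }

        cvReleaseMemStorage(&storage);
        return count;
    }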

You can also take a look at the Hough transform.

One way to detect shapes is to make use of cvBlobsLib.
It performs connected component labelling on binary images (similar to MATLAB's regionprops function). It also provides functions to manipulate, filter and extract results from the extracted blobs; see the features section for more information.
For an example, see:
https://www.youtube.com/watch?v=Y8Azb_upcIQ
An alternative is to make use of EmguCV
Emgu CV is a cross-platform .NET wrapper for the OpenCV image processing library, allowing OpenCV functions to be called from .NET-compatible languages such as C#, VB, VC++, IronPython, etc. The wrapper can be compiled in Mono and runs on Windows, Linux, Mac OS X, iPhone, iPad and Android devices.

Related

Produce bounding box from contour locations

I am new to OpenCV so I apologize if I use incorrect terminology. I am writing a program in C that finds objects in an image (in this case red building blocks) and extracts that part of the image and displays it as a new image. I have thresholded the image to remove everything but red and used cvDilate to blur the results slightly to make the object more distinct. I then used the OpenCV Contour finding and drawing functions to locate and draw the blocks.
How can I access the contour locations stored as CvSeq* and take the upper-most and lower-most contour values from a cluster of contours (there may still be some noise from other red objects) so that I can make a bounding box around it?
Thanks
Actually, you don't have to do this manually because OpenCV provides this type of functionality for you.
Look at cvMinAreaRect2 and cvBoundingRect. Here are their examples, respectively: minarea.c (it has some debugging stuff, but should give you the gist of how to use it) and generalContours_demo1.cpp (in C++, but it should be easy to translate).
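To make that concrete, here is a minimal C sketch of both calls for a single contour taken from your CvSeq* (drawing colors and variable names are just placeholders; for a whole cluster you would union the resulting CvRects):

    #include <opencv/cv.h>

    /* Given one contour from cvFindContours, draw its axis-aligned
       bounding box and compute its minimum-area (rotated) box. */
    void bound_contour(CvSeq *contour, IplImage *img)
    {
        /* axis-aligned bounding box */
        CvRect box = cvBoundingRect(contour, 0);
        cvRectangle(img,
                    cvPoint(box.x, box.y),
                    cvPoint(box.x + box.width, box.y + box.height),
                    CV_RGB(0, 255, 0), 2, 8, 0);

        /* minimum-area (rotated) bounding box */
        CvMemStorage *storage = cvCreateMemStorage(0);
        CvBox2D minBox = cvMinAreaRect2(contour, storage);
        CvPoint2D32f corners[4];
        cvBoxPoints(minBox, corners);   /* the four rotated corners */
        cvReleaseMemStorage(&storage);
    }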
As a side note, I would definitely suggest using the C++ API of OpenCV as it is a bit easier to understand and has more features. Also, you spend a lot less time/code worrying about memory management since the Mat class handles that for you.
Hope that helps!

Simplest way to extend GUI functionality in OpenCV 1.1 on Windows?

I have a large real-time computer vision project in C with a GUI that uses OpenCV 1.1's built-in HighGUI library. As others have pointed out, the OpenCV GUI library is very limited.
I'd like to make a slider bar (trackbar) GUI element like cvCreateTrackbar whose values can go either negative or positive. OpenCV currently limits trackbars to non-negative integer values only. I don't need anything else fancy, just a slider bar that can go negative.
What is the easiest way to get a slider bar that goes positive and negative?
I am on Windows XP using mingw and OpenCV 1.1. Ideally any solution should require minimum dependencies or libraries, and should play nice with Windows and mingw.
You could write a wrapper around the progress bar class that normalizes your values to the range of the progress bar. For example, if your range is -5 to 5, inclusive, add 5 to the value before sending to the progress widget. The "+5" adjusts the range from 0 to 10.
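Applied to HighGUI's trackbar, a minimal sketch of that offset trick (the window name, trackbar name and the -5..5 range are placeholders):

    #include <opencv/highgui.h>

    #define OFFSET 5              /* maps slider 0..10 to logical -5..5 */

    static int g_pos   = OFFSET;  /* raw slider position               */
    static int g_value = 0;       /* the signed value your code uses   */

    /* HighGUI hands us the raw 0..10 position; subtract the offset */
    static void on_trackbar(int pos)
    {
        g_value = pos - OFFSET;
    }

    int main(void)
    {
        cvNamedWindow("controls", CV_WINDOW_AUTOSIZE);
        /* the slider runs 0..2*OFFSET; position OFFSET means zero */
        cvCreateTrackbar("signed value", "controls", &g_pos,
                         2 * OFFSET, on_trackbar);
        cvWaitKey(0);
        return 0;
    }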
You may want to consider using a different widget as most definitions of progress measurements don't go negative. (Is your application actually making negative progress?) Also, most progress widgets allow for a positive increment, different than an absolute value. As the application runs, it adds an increment to the widget.
"That's just my opinion, I could be wrong." -- Dennis Miller.
I just uploaded an OpenCV GUI toolkit, zGUI: https://github.com/zetapark/zGUI. Please take a look. It depends solely on OpenCV and is event driven.

Mixing layers in OpenCV

I need to make a program where I have to detect the edge of a subimage (like a face in a portrait) using the Canny detector. Then I need to filter that portion out and paste it onto another background; it is like mixing two layers. Can anybody give me an algorithm for this, or any idea about the process?
You are probably aware that the task of selecting a subimage is better known as working with a Region of Interest (ROI).
Edge detection with Canny shouldn't be a problem, since OpenCV implements it as cvCanny().
From what I understand, you want to overlap two images, i.e. put one image on top of the other. Take a look at step 2 in the first link I suggest: Adding Two Images with Different Size.
If you want to BLEND them, then check these instructions. I have used them before to draw over the webcam window.
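A rough C sketch tying those steps together with cvSetImageROI, cvCanny and cvAddWeighted (the file names, ROI rectangle, Canny thresholds and blend weights are all placeholder values):

    #include <opencv/cv.h>
    #include <opencv/highgui.h>

    int main(void)
    {
        /* foreground (e.g. the portrait) and the new background */
        IplImage *fg = cvLoadImage("portrait.jpg", CV_LOAD_IMAGE_COLOR);
        IplImage *bg = cvLoadImage("background.jpg", CV_LOAD_IMAGE_COLOR);
        IplImage *gray  = cvCreateImage(cvGetSize(fg), IPL_DEPTH_8U, 1);
        IplImage *edges = cvCreateImage(cvGetSize(fg), IPL_DEPTH_8U, 1);

        /* 1. edges of the subimage, e.g. to help locate the face region */
        cvCvtColor(fg, gray, CV_BGR2GRAY);
        cvCanny(gray, edges, 50, 150, 3);      /* thresholds are guesses */

        /* 2. choose the region of interest (placeholder rectangle) */
        CvRect roi = cvRect(100, 80, 200, 240);
        cvSetImageROI(fg, roi);
        cvSetImageROI(bg, roi);    /* same-sized region in the target */

        /* 3a. hard paste: copy the ROI onto the background ...       */
        cvCopy(fg, bg, NULL);
        /* 3b. ... or blend the two regions instead of a hard paste   */
        /* cvAddWeighted(fg, 0.7, bg, 0.3, 0.0, bg);                  */

        cvResetImageROI(fg);
        cvResetImageROI(bg);
        cvNamedWindow("mixed", CV_WINDOW_AUTOSIZE);
        cvShowImage("mixed", bg);
        cvWaitKey(0);
        return 0;                  /* cleanup of the images omitted   */
    }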

Recreating <BevelBitmapEffect> in a Pixel Shader/Other Method in WPF

Now that <BevelBitmapEffect> (amongst other effects) has been deprecated, I'm looking to see how I could re-create exactly the same thing in a shader effect (including its properties of BevelWidth, EdgeProfile, LightAngle, Relief and Smoothness).
I'm somewhat familiar with pixel shading, mostly just color manipulation of the whole image/element in Shazzam, but how to create a bevel eludes me. Is this a vertex shader, and if so, how would I get started? I have searched high and low for this but can't seem to find an inkling of information that would let me get started on reproducing <BevelBitmapEffect> in a custom Effect.
Or, based on a comment below, is this 3D in WPF and if so, are there code libraries out there for recreating a <BevelBitmapEffect> that mimics the one that came with previous versions of WPF?
To create the bevel you need to know the distance from the edge for each pixel (search in all directions until alpha = 0). From this you can calculate the normal, then shade it (see the Silverlight example). As you mentioned, there isn't much content about bevels, but there are some good resources if you search for bump mapping/normal mapping, to which the shading is similar. In particular, this thread has a Silverlight example using a pre-calculated normal map.
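To make the distance-then-normal idea concrete, here is a CPU-side sketch in plain C (not the HLSL you would actually ship in a WPF pixel shader; the array layout and search radius are assumptions):

    #include <math.h>

    /* Height map: distance (clamped to max_radius) from (x, y) to the
       nearest transparent pixel (alpha == 0); alpha is a w*h array. */
    static float edge_height(const unsigned char *alpha, int w, int h,
                             int x, int y, int max_radius)
    {
        float best = (float)max_radius;
        int dx, dy;
        for (dy = -max_radius; dy <= max_radius; dy++) {
            for (dx = -max_radius; dx <= max_radius; dx++) {
                int px = x + dx, py = y + dy;
                int transparent = (px < 0 || py < 0 || px >= w || py >= h)
                                  || alpha[py * w + px] == 0;
                if (transparent) {
                    float d = sqrtf((float)(dx * dx + dy * dy));
                    if (d < best) best = d;
                }
            }
        }
        return best / (float)max_radius;   /* 0 at the edge, 1 well inside */
    }

    /* Normal from the height gradient (central differences), ready for a
       standard diffuse/specular shading step. */
    static void edge_normal(const float *height, int w, int h,
                            int x, int y, float out[3])
    {
        float hl = height[y * w + (x > 0     ? x - 1 : x)];
        float hr = height[y * w + (x < w - 1 ? x + 1 : x)];
        float hu = height[(y > 0     ? y - 1 : y) * w + x];
        float hd = height[(y < h - 1 ? y + 1 : y) * w + x];
        float nx = hl - hr, ny = hu - hd, nz = 1.0f;
        float len = sqrtf(nx * nx + ny * ny + nz * nz);
        out[0] = nx / len; out[1] = ny / len; out[2] = nz / len;
    }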
To do everything in hardware, ideally you would use a multipass shader; WPF's built-in effects are multipass, but WPF doesn't allow you to write your own multipass effects.
To workaround this limitation:
You could create multiple shaders and nest your element in multiple controls applying a different effect to each one.
Target WPF 4.0 and use Pixel Shader 3.0 for the increased instruction count, although this may be too high a hardware requirement, and there is no software fallback for PS 3.0.
Do some or all of the steps in software.
Without doing one of these, you'd be lucky to do a 3- or 4-pixel bevel before you reach the instruction limit, as the loops needed to find the distance increase the instruction count quickly.
New Sample
Download. Here is an example that uses Pixel Shader 3.0. It uses one shader to find the distance (aka height) to the edge; another (based on the NVIDIA Phong shaders) is used to shade it. Bevel profiles are created by adjusting the input height in code, or a custom profile can be used by supplying a special texture. There are some other features to add, but it seems easily performant enough to animate the properties. It's lacking in comments, but I can explain parts if needed.
There's a great article by Rod Stephens on DevX that shows how to use System.Drawing to create the WPF effects (the ones that used to exist, such as Bevel) and more. You have to register to view the article, though; it's at http://www.devx.com/DevXNet/Article/45039. Downloadable source code too.

How to overlay text or markers on a BMP image

I'm working on an image processing project where I'm trying to locate features in a .bmp image. I'm writing the whole source code in C.
The algorithm I'm developing searches for certain features; if a desired feature is found, the algorithm creates a point (x coordinate, y coordinate). I now want to overlay this point on the image as a green or red dot.
As of now it's only a point; later on I wish to draw a box around a group of features, for example a face.
I don't know how to do this. I'm developing in a Linux (Ubuntu 9.04) environment; can anyone suggest what I should do?
Vikram
Take a look at ImageMagick too. I've used it in the past from Perl, but it has a C interface as well.
ImageMagick® is a software suite to create, edit, and compose bitmap images. It can read, convert and write images in a variety of formats (over 100) including DPX, EXR, GIF, JPEG, JPEG-2000, PDF, PhotoCD, PNG, Postscript, SVG, and TIFF. Use ImageMagick to translate, flip, mirror, rotate, scale, shear and transform images, adjust image colors, apply various special effects, or draw text, lines, polygons, ellipses and Bézier curves.
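If you go the ImageMagick route, a minimal MagickWand sketch of drawing a dot might look like this (file names, coordinates and the dot radius are placeholders; the header path differs between ImageMagick 6 and 7):

    #include <wand/MagickWand.h>   /* <MagickWand/MagickWand.h> on IM 7 */

    /* Sketch: read a BMP, draw a red dot at a detected feature, save it. */
    int main(void)
    {
        MagickWandGenesis();

        MagickWand  *image = NewMagickWand();
        DrawingWand *draw  = NewDrawingWand();
        PixelWand   *red   = NewPixelWand();

        MagickReadImage(image, "input.bmp");
        PixelSetColor(red, "red");
        DrawSetFillColor(draw, red);

        /* filled circle of radius 3 centred on the feature at (120, 80) */
        DrawCircle(draw, 120.0, 80.0, 123.0, 80.0);
        MagickDrawImage(image, draw);

        MagickWriteImage(image, "output.bmp");

        DestroyPixelWand(red);
        DestroyDrawingWand(draw);
        DestroyMagickWand(image);
        MagickWandTerminus();
        return 0;
    }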
I would recommend using Cairo for your drawing. What you can do is load the image into an Image Surface, do your processing on the image surface using direct pixel access, and then use a Cairo context to draw what you need. The library also supports text using libpango, and Ubuntu loves the use of Cairo since GTK uses it. There are many tutorials for Cairo as well if you search around. The main site has some already.
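A minimal Cairo sketch of just the drawing part (the surface size, coordinates and output file are placeholders, and copying your BMP's pixels into the surface is left out, since Cairo itself has no BMP loader):

    #include <cairo/cairo.h>

    int main(void)
    {
        /* In a real program you would fill this surface with your BMP's
           pixel data first; here it starts out blank. */
        cairo_surface_t *surface =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 640, 480);
        cairo_t *cr = cairo_create(surface);

        /* green dot on a detected feature at (120, 80) */
        cairo_set_source_rgb(cr, 0.0, 1.0, 0.0);
        cairo_arc(cr, 120.0, 80.0, 3.0, 0.0, 2.0 * 3.14159265);
        cairo_fill(cr);

        /* red box around a group of features */
        cairo_set_source_rgb(cr, 1.0, 0.0, 0.0);
        cairo_set_line_width(cr, 2.0);
        cairo_rectangle(cr, 100.0, 60.0, 80.0, 100.0);
        cairo_stroke(cr);

        cairo_surface_write_to_png(surface, "annotated.png");

        cairo_destroy(cr);
        cairo_surface_destroy(surface);
        return 0;
    }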
