I am trying to implement alpha blending of two images for image stitching.
My first image is this ->
here is my second image ->
here is my result image ->
As you can see, the result is not correct. I think I first have to find the overlapping region between them and then apply alpha blending to that overlapping part.
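For reference, the kind of blending I have in mind over a known overlap would look roughly like this (a minimal sketch; the overlap width, image sizes and filenames are just placeholders):

import numpy as np
import cv2

def blend_overlap(left, right, overlap):
    # Linearly cross-fade a horizontal overlap of `overlap` pixels.
    # Assumes both images have the same height and that `right` starts
    # (left width - overlap) pixels into the panorama.
    h, lw = left.shape[0], left.shape[1]
    out_w = lw + right.shape[1] - overlap
    pano = np.zeros((h, out_w, 3), dtype=np.float32)

    # Copy the non-overlapping parts unchanged.
    pano[:, :lw - overlap] = left[:, :lw - overlap]
    pano[:, lw:] = right[:, overlap:]

    # Alpha ramps from 1 (pure left) to 0 (pure right) across the overlap.
    alpha = np.linspace(1.0, 0.0, overlap).reshape(1, overlap, 1)
    left_part = left[:, -overlap:].astype(np.float32)
    right_part = right[:, :overlap].astype(np.float32)
    pano[:, lw - overlap:lw] = alpha * left_part + (1 - alpha) * right_part
    return pano.astype(np.uint8)

left = cv2.imread("left.jpg")     # placeholder filenames
right = cv2.imread("right.jpg")
cv2.imwrite("blended.jpg", blend_overlap(left, right, overlap=100))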
First of all, have you seen the new "stitching" module introduced in OpenCV 2.3?
It provides a set of building blocks for a stitching pipeline, including the blending and "finding an overlap" (i.e. registration) steps. Here is the documentation: http://docs.opencv.org/modules/stitching/doc/stitching.html and an example stitching application: stitching_detailed.cpp
I recommend studying the code of this sample to better understand the details.
Regarding the finding of overlap there are several common approaches in computer vision:
optical flow
template matching
feature matching
For your case I recommend the last one; it works very well on photos. This approach is already implemented in OpenCV: explore the OpenCV source and see how cv::detail::BestOf2NearestMatcher works.
I think the most common approach is SIFT: find a few keypoints in both images, match them, then warp one image onto the other to get your result. See this
Here are explanations about SIFT and panorama stitching.
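As a rough sketch of that idea with OpenCV's Python bindings (cv2.SIFT_create needs a recent OpenCV; older builds expose it via xfeatures2d; filenames are placeholders), not the exact code from those articles:

import cv2
import numpy as np

img1 = cv2.imread("left.jpg")     # placeholder filenames
img2 = cv2.imread("right.jpg")
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()          # cv2.xfeatures2d.SIFT_create() on older builds
kp1, des1 = sift.detectAndCompute(gray1, None)
kp2, des2 = sift.detectAndCompute(gray2, None)

# Keep only distinctive matches (Lowe's ratio test).
pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

# At least 4 good matches are needed to estimate a homography.
src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp img2 into img1's frame; where both images end up with content is
# the overlapping region you would feed into the alpha blending step.
pano = cv2.warpPerspective(img2, H, (img1.shape[1] + img2.shape[1], img1.shape[0]))
pano[:img1.shape[0], :img1.shape[1]] = img1   # naive paste; blend the seam for a nicer result
cv2.imwrite("stitched.jpg", pano)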
I am looking at building an application using GStreamer, but first I have some questions regarding its capabilities with respect to a desired use case.
Say I wanted to build a pipeline that processes video data in a similar way as depicted below.
Videosrc -> Facedetect -> Crop -> Videosink
What is the canonical method for taking metadata produced on each frame by a given video filter (e.g. the bounding box from a facial detection filter) and passing it to a succeeding filter to operate on (e.g. the crop filter cropping each image to the bounding box provided by facedetect)?
I know there are properties and dynamic properties, but as far as I can tell from the docs, both require you to know what you want to happen when you construct the pipeline.
I also know that you can attach metadata to the GstBuffer object, which could potentially be used, but there would need to be an agreed-upon interface in that case, which doesn't seem very portable and may lack support across many elements with the same capabilities.
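For concreteness, one pattern I have been experimenting with is listening for facedetect's element messages on the bus and retuning videocrop from the application. A rough sketch follows; the exact message layout (structure named "facedetect" with a "faces" list of x/y/width/height structures) and how PyGObject marshals it are things I still need to verify against my GStreamer build, and the frame size is hard-coded here instead of being read from the caps:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "autovideosrc ! videoconvert ! facedetect ! videoconvert ! "
    "videocrop name=crop ! videoconvert ! autovideosink"
)
crop = pipeline.get_by_name("crop")

def on_element_message(bus, message):
    st = message.get_structure()
    if st is None or st.get_name() != "facedetect":
        return
    faces = st.get_value("faces")          # assumed: list of per-face structures
    if not faces:
        return
    face = faces[0]                        # crop to the first detected face only
    # Convert the bounding box into videocrop's left/top/right/bottom margins.
    frame_w, frame_h = 640, 480            # assumption: query the negotiated caps in real code
    crop.set_property("left", face.get_value("x"))
    crop.set_property("top", face.get_value("y"))
    crop.set_property("right", frame_w - face.get_value("x") - face.get_value("width"))
    crop.set_property("bottom", frame_h - face.get_value("y") - face.get_value("height"))

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::element", on_element_message)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()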
Since I'm a beginner with SimpleCV, can somebody please guide me with the following application? I'm working on a stereo project, and I have two images, one from the left eye and one from the right.
First: I must display them side by side. (After drawing features and keypoint matches, SimpleCV can show two images side by side, but how can I do this manually?)
Second: I will track any mouse click event on either of these images, then extract the clicked point and mark its location on the other image after SIFT detection. (Since the left and right views overlap, the clicked pixel is most likely present in the other image with a small offset/shift.) I may use SIFT features or any other similar method offered in SimpleCV, but by default the features are detected with the SURF algorithm. How can I switch to the SIFT algorithm and use it? Should I create a features object somewhere?
Thanks in advance.
To show two images side by side you can use
img1.sideBySide(img2)
For more information about it, start the SimpleCV shell,
$ simplecv
SimpleCV:1> help(Image.sideBySide)
This will show you the complete docs of the sideBySide function.
Keypoints:
You can use any of the following algorithms for keypoints.
SURF
STAR
FAST
MSER
ORB
SIFT
img.findKeypoints(flavor="SIFT")
Again, for more info just use help(Image.findKeypoints) in the SimpleCV shell.
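A small end-to-end sketch tying these together (filenames are placeholders; whether the SIFT flavor is available depends on the OpenCV build underneath SimpleCV):

from SimpleCV import Image, Display
import time

left = Image("left.jpg")
right = Image("right.jpg")

pair = left.sideBySide(right)        # combined view, left image first
disp = Display()
pair.save(disp)

kps = left.findKeypoints(flavor="SIFT")   # SIFT instead of the default SURF
if kps:
    print("found %d SIFT keypoints in the left image" % len(kps))

while disp.isNotDone():
    click = disp.leftButtonDownPosition()
    if click is not None:
        x, y = click
        # Points with x >= left.width fall on the right-hand image.
        side = "left" if x < left.width else "right"
        print("clicked %s image at %s" % (side, (x, y)))
    time.sleep(0.05)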
I have created an application that extracts the MSER data and stores it in a CvSeq*. I was wondering if there are any functions, or tutorials, in OpenCV that I could use to compare this data with the extracted data of another image.
Thanks.
The simplest implementation of MSER happens to be this one using the C API. There's another listing from the Google SoC here using the C++ API.
I guess your best way to compare results would be to implement the code in either of the above links. Comparing the results with MATLAB is generally a good idea, as we can expect it to be a standard (more or less); VLFeat has a library with both C and MATLAB interfaces that includes MSER functions. The last link also has a brief explanation from which you might be able to work out which "data" to compare. What sort of comparison do you have in mind? If it's similarity between regions in two different images, then using a grey-level co-occurrence matrix (GLCM) of the regions should work. MSER will give you the regions, but the comparison may not require further MSER data.
Did you use the OpenCV cvMSER() function btw, or code the entire thing?
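If you'd rather prototype with the Python bindings first (the question uses the C API's CvSeq, but the idea is the same), here is a rough sketch of extracting MSER regions from two images and comparing them with a simple shape descriptor. Hu moments are used here only as a stand-in descriptor; the GLCM mentioned above would be the texture-based alternative (scikit-image's graycomatrix, for example). Filenames are placeholders:

import cv2
import numpy as np

def mser_descriptors(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    descs = []
    for pts in regions:
        # Each region is a set of pixel coordinates; describe its shape
        # with log-scaled Hu moments.
        hu = cv2.HuMoments(cv2.moments(pts.reshape(-1, 1, 2))).flatten()
        descs.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-30))
    return np.array(descs)

a = mser_descriptors("image1.png")
b = mser_descriptors("image2.png")

# For each region in image1, distance to its closest region in image2.
dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).min(axis=1)
print("mean closest-region distance:", dists.mean())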
I am working on a project at school regarding face detection, based on the technique described by Viola and Jones (2001/2004).
I've read that OpenCV has an implementation of this algorithm, and that it works very well.
I was wondering if you have any advice regarding what pre-processing techniques to apply to the images before testing for the existence of a face (e.g. histogram equalization)?
I basically used the code from this sample program from the OpenCV page and it worked very well for my master's thesis project. If you get bad results or your lighting is strange, you can try histogram equalization.
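For reference, a minimal sketch of that workflow with the modern Python bindings, not the sample program itself (the cascade file is the stock frontal-face one shipped with opencv-python; the photo filename is a placeholder):

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("photo.jpg")                     # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                     # the pre-processing step asked about

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)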
I did something similar with a friend for a university project, and especially on low-resolution video sequences it really helped to upsample the frame, doubling its size. It was my friend's idea; he had previously taken an image processing class. Although theoretically equivalent, decreasing the initial scan window size and the horizontal and vertical steps didn't produce the same result. In other words, it may be better to work on larger images with larger scan windows than on smaller images with smaller scan windows. I don't know exactly why.
Bye ;-)
I know it's too late, but do go through this site as well.
It covers the common pre-processing required for the images: equalizing the image, removing irrelevant content, etc.
I am thinking about creating a database system for images where they are stored with compact signatures and then matched against a "query image" that could be a resized, cropped, brightened, rotated or a flipped version of the stored one. Note that I am not talking about image similarity algorithms but rather strictly about duplicate detection. This would make things a lot simpler. The system wouldn't care if two images have an elephant on them, it would only be important to detect if the two images are in fact the same image.
Histogram comparisons simply won't work for cropped query images. The only viable way I can see is shape/edge detection. Images would first be discretized somehow, every pixel being converted to an 8-level grayscale for example. The discretized image would contain vast regions of the same colour, which would help indicate shapes. These shapes could then be described with coefficients, and their relative positions could be remembered. Compact signatures would be produced from that. This process would be carried out over each image being stored and over each query image when a comparison has to be performed. Does that sound like an efficient and realisable algorithm? To illustrate this idea:
removed dead ImageShack link
I know this is an immature research area, I have read Wikipedia on the subject and I would ask you to propose your ideas about such an algorithm.
SURF should do the job.
http://en.wikipedia.org/wiki/SURF
It is fast and robust; it is invariant to rotation and scaling, and also fairly tolerant of blur and contrast/lighting changes (though less strongly).
There is an example of automatic panorama stitching.
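A rough sketch of the matching idea as a duplicate check (ORB here rather than SURF, since SURF sits in OpenCV's non-free module; the score is just the fraction of keypoints surviving a ratio test, and the threshold would need tuning on real data):

import cv2

def match_score(path_a, path_b, ratio=0.75):
    a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(a, None)
    kp_b, des_b = orb.detectAndCompute(b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / float(min(len(kp_a), len(kp_b)))

# Scores should end up close to 1 for crops/resizes of the same photo
# and near 0 for unrelated images; filenames are placeholders.
print(match_score("stored.jpg", "query.jpg"))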
Check the article on SIFT first:
http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
If you want to do a feature-detection-driven model, you could perhaps take the singular value decomposition of the images (you'd probably have to do an SVD for each colour channel) and use the first few columns of the U and V matrices along with the corresponding singular values to judge how similar the images are.
Very similar to the SVD method is one called principal component analysis (PCA), which I think will be easier to use for comparing images. The PCA method is pretty close to just taking the SVD and getting rid of the singular values by factoring them into the U and V matrices. If you follow the PCA path, you might also want to look into correspondence analysis. By the way, the PCA method was a common method used in the Netflix Prize for extracting features.
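A small numerical sketch of the SVD variant, keeping only the top-k singular values (one of several ways to turn the suggestion into code; filenames and k are placeholders, and the U/V columns could be kept too for a richer comparison):

import numpy as np
import cv2

def svd_signature(path, k=20):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    # Singular values only; compute_uv=False skips the U and V factors.
    s = np.linalg.svd(gray, compute_uv=False)[:k]
    return s / np.linalg.norm(s)      # normalize so overall brightness matters less

a = svd_signature("image1.png")
b = svd_signature("image2.png")
print("signature distance:", np.linalg.norm(a - b))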
How about converting this Python code back to C?
Check out tineye.com. They have a good system that's always improving. I'm sure you can find research papers from them on the subject.
The Wikipedia article you might be referring to is the one on feature detection.
If you are running on an Intel/AMD processor, you could use the Intel Integrated Performance Primitives to get access to a library of image processing functions. Or, beyond that, there is the OpenCV project, again another library of image processing functions. The advantage of using a library is that you can try various algorithms, already implemented, to see what will work for your situation.