How to use MSER to detect regions in images (C)

I have created an application that extracts MSER data and stores it in a CvSeq*. I was wondering whether there are any functions or tutorials in OpenCV that I could use to compare two images using the MSER data extracted from each.
Thanks.

The simplest implementation of MSER happens to be this one, using the C API. There is another listing here, from the Google Summer of Code, using the C++ API.
I guess your best way to compare results would be to implement the code in either of the above links. Comparing the results with MATLAB is generally a good idea, as we can expect that to be a de facto standard (more or less); VLFeat is a library with both C and MATLAB interfaces that includes MSER functions. The last link also has a brief explanation from which you might be able to work out which "data" to compare. What sort of comparison do you have in mind? If it is similarity between regions in two different images, then a gray-level co-occurrence matrix (GLCM) of the regions should work: MSER gives you the regions, but the comparison itself may not require any further MSER data.
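In case it helps, here is a minimal sketch of pulling out MSER regions with OpenCV's newer C++ interface (not the CvSeq*-based C API from your code); the file name is just a placeholder, and the point is only to show what data you get back to compare:

    // Minimal sketch: extract MSER regions with OpenCV's C++ interface.
    // The question uses the older CvSeq*-based C API; this modern equivalent
    // is shown only to illustrate what data the detector returns.
    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    int main()
    {
        // "input.png" is a placeholder file name.
        cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
        if (img.empty()) return 1;

        cv::Ptr<cv::MSER> mser = cv::MSER::create();  // default parameters

        std::vector<std::vector<cv::Point>> regions;  // one point list per region
        std::vector<cv::Rect> boxes;                  // bounding box per region
        mser->detectRegions(img, regions, boxes);

        std::cout << "Found " << regions.size() << " MSER regions\n";

        // Each region is just a set of pixel coordinates; from here you could
        // crop img(boxes[i]) and compute whatever comparison measure you like
        // (e.g. a GLCM-based texture descriptor) for matching against another image.
        return 0;
    }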
Did you use the OpenCV cvMSER() function, by the way, or did you code the entire thing?


Is there an intermediate representation in PlantUML that can be used for further processing?

I'd like to use the PlantUML syntax to define component structures, which I then want to process in a tool of my own. However, I'd like to avoid having to write a PlantUML parser. Is there some sort of intermediate representation in PlantUML that I could use for that? It would be perfect to have, for example, a JSON structure that contains all diagram objects and the relations among them in a concise way.
I could not find anything in the docs; maybe someone with more insight into the project can help?
As Jean-Marc Volle pointed out, the project github.com/jupe/puml2code makes it possible to process puml files and generate source code in different languages using Handlebars templates. Currently the code generation is limited to the classes in a puml file.
I have used puml2code as a starting point for a new project, github.com/robbito/puml2json, which simplifies the process a bit, as it doesn't require Handlebars. JSON is generated directly from the PlantUML code. puml2json currently also supports only a subset of PlantUML.

What Simulink input blocks do I need to use to build such a system?

I am relatively new to MATLAB/Simulink, so I am thankful to have a forum like this for my questions, which might seem a touch trivial to you. Thanks for your time; here you go.
This is about creating a Simulink model.
My inputs are a speed and a load, both of which are variable. For each input point I need a pressure curve (pressure vs. crank angle) as the output; I have this data and will need to feed it in. What sort of blocks will I need to use, and how do I integrate both arrays?
Second question, which would be a next step: I have 4 such data point sets (speed vs. load). Is there a way I could interpolate them over the whole speed/load map?
Like I mentioned before, I am really a novice here, so any help would be highly appreciated. Thanks again. \m/
Regards,
Anirudh
If you have variables containing your data in the MATLAB workspace, the best way to get them into Simulink is to use a "From Workspace" block; see the documentation at http://www.mathworks.com/help/simulink/slref/fromworkspace.html. If your data is fixed and does not change over time, use a Constant block (http://www.mathworks.com/help/simulink/slref/constant.html). The "From Workspace" block also allows you to interpolate missing data.
Write the mathematical equations that model your system on paper, and implement them with Simulink blocks (it is best to start from the highest-order derivatives and integrate, rather than the other way round).
You may need/want to use a 1-D Lookup Table if you have some non-linear relationship in your model. Any data/variable that is in your MATLAB workspace is accessible from your Simulink model, e.g. to parameterise the blocks in your model. As already mentioned, you can use the From Workspace block to use your data as input(s) to the model.
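Regarding your second question about filling in the whole speed/load map from four measured points: that is essentially what a 2-D lookup table does for you. Just as an illustration of the underlying interpolation, here is a plain C++ sketch with invented numbers (not Simulink or MATLAB code):

    // Sketch only: what a 2-D lookup table does internally when it bilinearly
    // interpolates between four measured (speed, load) operating points.
    // The numbers below are invented placeholders, not real engine data.
    #include <iostream>

    // Bilinear interpolation on a 2x2 grid of values v[speedIndex][loadIndex].
    double interp2x2(double speed, double load,
                     double s0, double s1,      // grid speeds
                     double l0, double l1,      // grid loads
                     const double v[2][2])      // measured values at the 4 corners
    {
        double ts = (speed - s0) / (s1 - s0);   // 0..1 position along the speed axis
        double tl = (load  - l0) / (l1 - l0);   // 0..1 position along the load axis
        double low  = v[0][0] * (1 - tl) + v[0][1] * tl;   // interpolate at s0
        double high = v[1][0] * (1 - tl) + v[1][1] * tl;   // interpolate at s1
        return low * (1 - ts) + high * ts;                 // blend between the two speeds
    }

    int main()
    {
        const double v[2][2] = { { 10.0, 14.0 },   // values at (1000 rpm, 20 Nm) and (1000 rpm, 80 Nm)
                                 { 12.0, 18.0 } }; // values at (3000 rpm, 20 Nm) and (3000 rpm, 80 Nm)
        std::cout << interp2x2(2000.0, 50.0, 1000.0, 3000.0, 20.0, 80.0, v) << "\n";
        return 0;
    }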
You might also want to check the Getting Started guide for Simulink.

How to find the overlapping region between two images in OpenCV?

I am trying to implement alpha blending of two images for image stitching.
(The first image, the second image, and the blended result were attached here as images that are no longer available.)
As you can see, the result is not proper. I think I first have to find the overlapping region between them and then apply alpha blending to the overlapping part.
First of all, have you seen the new "stitching" module introduced in OpenCV 2.3?
It provides a set of building blocks for a stitching pipeline, including the blending and "finding an overlap" (i.e. registration) steps. Here is the documentation: http://docs.opencv.org/modules/stitching/doc/stitching.html and an example stitching application: stitching_detailed.cpp
I recommend studying the code of this sample to better understand the details.
Regarding finding the overlap, there are several common approaches in computer vision:
optical flow
template matching
feature matching
For your case I recommend the last one; it works very well on photos. This approach is already implemented in OpenCV: explore the OpenCV source and see how cv::detail::BestOf2NearestMatcher works.
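As a rough sketch of that feature-matching route (using ORB rather than SIFT so it compiles against a stock OpenCV build; file names and parameters are placeholders), you can estimate a homography between the two photos and take the region where the warped second image lands inside the first one as the overlap to blend:

    // Rough sketch: find the overlap between two photos via feature matching
    // and a homography. ORB is used here simply because it ships with OpenCV;
    // the stitching module does something similar, but far more robustly.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::Mat img1 = cv::imread("left.jpg");    // placeholder file names
        cv::Mat img2 = cv::imread("right.jpg");
        if (img1.empty() || img2.empty()) return 1;

        cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat desc1, desc2;
        orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
        orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

        // Match descriptors; cross-checking keeps only mutually consistent matches.
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(desc1, desc2, matches);

        std::vector<cv::Point2f> pts1, pts2;
        for (const cv::DMatch& m : matches) {
            pts1.push_back(kp1[m.queryIdx].pt);
            pts2.push_back(kp2[m.trainIdx].pt);
        }
        if (pts1.size() < 4) return 1;  // need at least 4 correspondences

        // Homography mapping img2 into img1's coordinate frame (RANSAC rejects outliers).
        cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);
        if (H.empty()) return 1;

        // Warp img2; the non-black pixels that fall inside img1 are the overlap,
        // which is where the alpha blending should be applied.
        cv::Mat warped;
        cv::warpPerspective(img2, warped, H, img1.size());

        cv::Mat mask;
        cv::cvtColor(warped, mask, cv::COLOR_BGR2GRAY);
        cv::threshold(mask, mask, 0, 255, cv::THRESH_BINARY);
        cv::imwrite("overlap_mask.png", mask);
        return 0;
    }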
I think the most common approach is SIFT: find a few keypoints in both images, match them, then warp one image onto the other to get your result. See this.
Here are explanations about SIFT and panorama stitching.

libxml2 writer differences

The bulk of the examples I can find for libxml2 are all about loading/parsing XML files. But I'm only interested in writing them; the code will never have to parse any files. There is an example using different writers, which shows how to use the file, memory, DOM and tree models.
Looking through the code, I don't see any significant differences between them when it comes to writing. How does one decide which is better to use? (In other words, in what cases is one better than the others?)
The differences between the four functions you mention are minimal; it's all about where the contents go. As Alex mentioned, if memory is a concern, using xmlNewTextWriterFilename has the advantage of not needing to hold the result in memory.
The xmlWriter API, to which all the methods you mentioned belong, is one of the APIs offered. The other of note is the tree API. xmlWriter is more like calling write() to print to a file, and the tree is more like building nested structs in memory.
The tree-based versions can be good if your data is constructed in a non-linear fashion, going back and adding/changing things based on later information, etc. This would require some workarounds/caching with the streaming xmlWriter interface, as you can't change things once they've been output. The in-memory tree, however, can be tweaked fully until the instant it's serialized.
The tree API has the downside that it has to keep the entire thing in memory; the rule of thumb is that the memory requirement for a parsed tree is roughly 4x the size of the serialized XML file.
My decision usually depends on whether I expect to create large documents. If not, I use the tree API, as the flexibility will be there if I want it. If I know efficiency will be a concern, or I'll be working with large documents, the streaming xmlWriter is the way to go.
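For what it's worth, here is a minimal sketch of the streaming xmlWriter path, writing straight to a file so nothing has to be held in memory (element and file names are just examples):

    // Minimal sketch of the streaming xmlWriter path: the document is written
    // out as the calls are made, so nothing has to be kept in memory.
    // File and element names are placeholders.
    #include <libxml/xmlwriter.h>

    int main()
    {
        xmlTextWriterPtr writer = xmlNewTextWriterFilename("out.xml", 0 /* no compression */);
        if (writer == NULL) return 1;

        xmlTextWriterSetIndent(writer, 1);                       // pretty-print output
        xmlTextWriterStartDocument(writer, NULL, "UTF-8", NULL); // <?xml ... ?>

        xmlTextWriterStartElement(writer, BAD_CAST "catalog");   // <catalog>
        xmlTextWriterStartElement(writer, BAD_CAST "item");      //   <item id="1">value</item>
        xmlTextWriterWriteAttribute(writer, BAD_CAST "id", BAD_CAST "1");
        xmlTextWriterWriteString(writer, BAD_CAST "value");
        xmlTextWriterEndElement(writer);                         //   </item>
        xmlTextWriterEndElement(writer);                         // </catalog>

        xmlTextWriterEndDocument(writer);                        // flushes and closes open elements
        xmlFreeTextWriter(writer);
        return 0;
    }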
Tree API examples can be found here: http://xmlsoft.org/examples/index.html#Tree
If you're on a device with limited memory, you probably don't want to use DOM or memory-based approaches. In that case, you probably want to write out the file as you iterate through the data structure you want to write to XML.

Duplicate image detection algorithms?

I am thinking about creating a database system for images where they are stored with compact signatures and then matched against a "query image" that could be a resized, cropped, brightened, rotated or a flipped version of the stored one. Note that I am not talking about image similarity algorithms but rather strictly about duplicate detection. This would make things a lot simpler. The system wouldn't care if two images have an elephant on them, it would only be important to detect if the two images are in fact the same image.
Histogram comparisons simply won't work for cropped query images. The only viable way to go that I see is shape/edge detection. Images would first be discretized somehow, every pixel being converted to an 8-level grayscale, for example. The discretized image will contain vast regions of the same colour, which would help indicate shapes. These shapes could then be described with coefficients, and their relative positions could be remembered. Compact signatures would be produced from that. This process would be carried out on each image being stored and on each query image when a comparison has to be performed. Does that sound like an efficient and realisable algorithm? To illustrate the idea:
(the original illustration was an ImageShack link that is now dead)
I know this is an immature research area; I have read the Wikipedia article on the subject, and I would ask you to propose your ideas about such an algorithm.
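Just to make the first step of the proposed pipeline concrete, here is a tiny sketch (C++ with OpenCV, file names made up) of the 8-level discretization described above; the shape description and signature steps are the open part of the question and are not shown:

    // Sketch of the discretization step described in the question: reduce every
    // pixel to one of 8 gray levels so that large uniform regions emerge.
    // Only this first fragment of the proposed pipeline is shown.
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::Mat img = cv::imread("photo.jpg", cv::IMREAD_GRAYSCALE); // placeholder file name
        if (img.empty()) return 1;

        // Keeping only the top 3 bits of each 8-bit pixel gives 8 gray levels
        // (0, 32, 64, ..., 224).
        cv::Mat quantized = img & 0xE0;

        cv::imwrite("quantized.png", quantized);
        return 0;
    }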
SURF should do the job.
http://en.wikipedia.org/wiki/SURF
It is fast and robust, and it is invariant to rotation and scaling, and to some extent to blur and contrast/lighting changes (though not as strongly).
There is an example of automatic panorama stitching.
Check the article on SIFT first:
http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
If you want to do a feature-detection-driven model, you could perhaps take the singular value decomposition of the images (you'd probably have to do an SVD per color channel) and use the first few columns of the U and V matrices, along with the corresponding singular values, to judge how similar the images are.
Very similar to the SVD method is one called principal component analysis (PCA), which I think will be easier to use to compare images. The PCA method is pretty close to taking the SVD and getting rid of the singular values by factoring them into the U and V matrices. If you follow the PCA path, you might also want to look into correspondence analysis. By the way, PCA was a common method used in the Netflix Prize for extracting features.
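A crude sketch of the SVD suggestion, comparing only the leading singular values of two grayscale images (file names and the number of values compared are placeholders; this is a toy measure, not a robust duplicate detector):

    // Crude illustration of the SVD suggestion: compare the leading singular
    // values of two grayscale images. Smaller distance = more alike.
    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <iostream>

    int main()
    {
        cv::Mat a = cv::imread("a.png", cv::IMREAD_GRAYSCALE);  // placeholder file names
        cv::Mat b = cv::imread("b.png", cv::IMREAD_GRAYSCALE);
        if (a.empty() || b.empty()) return 1;

        // Bring both images to the same size and floating-point range.
        cv::resize(b, b, a.size());
        cv::Mat fa, fb;
        a.convertTo(fa, CV_32F, 1.0 / 255.0);
        b.convertTo(fb, CV_32F, 1.0 / 255.0);

        // Singular values only (no U/V needed for this toy comparison).
        cv::Mat sa, sb;
        cv::SVD::compute(fa, sa, cv::SVD::NO_UV);
        cv::SVD::compute(fb, sb, cv::SVD::NO_UV);

        const int K = 20;  // number of leading singular values to compare
        double dist = 0.0;
        for (int i = 0; i < K && i < sa.rows && i < sb.rows; ++i)
            dist += std::abs(sa.at<float>(i) - sb.at<float>(i));

        std::cout << "signature distance: " << dist << "\n";
        return 0;
    }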
How about converting this Python code back to C?
Check out tineye.com. They have a good system that's always improving. I'm sure you can find research papers from them on the subject.
The Wikipedia article you might be referring to is the one on feature detection.
If you are running on an Intel/AMD processor, you could use the Intel Integrated Performance Primitives to get access to a library of image-processing functions. Beyond that, there is the OpenCV project, another library of image-processing functions. The advantage of using a library is that you can try various algorithms, already implemented, to see what will work for your situation.
