Image processing technique called PRUNING - C#

Does anyone have any idea how to implement an image processing technique called PRUNING? Any ideas, examples, etc.?
I'm working with OpenCV and C#; if anyone can help, I'd be grateful.

I assume you are looking to remove unwanted spurs and artifacts from images. Have you considered using morphology-based operations? You can consider thinning, the hit-or-miss transform, etc. This and this give a very basic explanation of morphology. Most morphology operations are implemented in OpenCV using MorphologyEx.
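If it helps, here is a rough Python/OpenCV sketch of one common way to do this kind of pruning on a skeletonized binary image (the question is about C#, but the calls map directly to the C# bindings): repeatedly find endpoint pixels, i.e. pixels with exactly one 8-connected neighbour, and delete them. This is a simplification of the classic hit-or-miss based pruning, and the iteration count is an assumption you would tune.

    import cv2
    import numpy as np

    def prune_spurs(skeleton, iterations=10):
        """Remove short spurs from a binary (0/255) skeleton by repeatedly
        deleting endpoint pixels (pixels with exactly one 8-connected neighbour)."""
        img = (skeleton > 0).astype(np.uint8)
        # Kernel that sums the 8 neighbours of each pixel (centre excluded).
        kernel = np.array([[1, 1, 1],
                           [1, 0, 1],
                           [1, 1, 1]], dtype=np.float32)
        for _ in range(iterations):
            neighbours = cv2.filter2D(img, cv2.CV_32F, kernel)
            endpoints = (img == 1) & (neighbours == 1)
            if not endpoints.any():
                break
            img[endpoints] = 0
        return img * 255

    # Typical use: thin/skeletonize the mask first (e.g. cv2.ximgproc.thinning if
    # opencv-contrib is installed), then prune:
    # pruned = prune_spurs(skeleton, iterations=15)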

There is no dedicated pruning function in OpenCV, but one can use the prune function of PlantCV, a version adapted for plant images.
https://plantcv.readthedocs.io/en/latest/prune/
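For reference, the PlantCV call from the linked page looks roughly like the sketch below. The exact signature and return values may differ between PlantCV versions (check the linked docs), and the size threshold and file name here are just placeholders.

    import cv2
    from plantcv import plantcv as pcv

    # Any 0/255 binary mask of the object; the file name is a placeholder.
    binary_mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

    # Skeletonize, then prune branches shorter than `size` pixels.
    skeleton = pcv.morphology.skeletonize(mask=binary_mask)
    pruned_skel, segmented_img, segment_objects = pcv.morphology.prune(
        skel_img=skeleton, size=50)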


Creating a converter for STP to OBJ

I'm trying to create an application that converts STEP (.stp) files to OBJ files. The OBJ format is quite simple, and creating a reader/writer for it won't be a problem, but the STEP format is more complicated. Is it possible to write the converter myself without too much trouble, or do you have any other suggestions for approaching this?
FreeCAD can convert STEP to OBJ.
I ended up using FreeCAD after all. It can't convert materials, the meshes are worse, and the files are bigger than those converted from 3DS Max. Converting the OBJ files from FreeCAD to FBX (which was the target format) ended up with a pretty good mesh. It still has no materials, but that is not important for me.
Nowadays you can use the free tool CAD Assistant (which, like FreeCAD, is based on Open CASCADE Technology) to convert STEP files into OBJ with color information preserved.
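As the question notes, the OBJ side is the easy part: a minimal writer just emits vertex lines ("v x y z") followed by 1-indexed face lines ("f i j k"). A small Python sketch, with placeholder data:

    def write_obj(path, vertices, faces):
        """Write a triangle mesh to a Wavefront OBJ file.
        vertices: list of (x, y, z) tuples; faces: list of 0-based (i, j, k) triples."""
        with open(path, "w") as f:
            for x, y, z in vertices:
                f.write("v {} {} {}\n".format(x, y, z))
            for i, j, k in faces:
                # OBJ face indices are 1-based.
                f.write("f {} {} {}\n".format(i + 1, j + 1, k + 1))

    # Example: a single triangle.
    write_obj("triangle.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])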

ScatterPlotDemo - JFreeChart

Can anyone please provide the source code for "ScatterPlotDemo1", which comes with the JFreeChart Developer Guide? I am developing exactly the same application, so it would be very helpful if I could get the code for the image attached below.
Thanks very much.
http://www.jfree.org/jfreechart/images/ScatterPlotDemo1.png
Any one of these recent scatter plot demos would probably be a better starting point. The only difference is the demonstration dataset. ScatterPlotDemo1 is not hard to find, but the required SampleXYDataset2 is less than exemplary. I'd look at nextGaussian() to vary the slope of a line.
Since you use the library frequently, I'd recommend The JFreeChart Developer Guide†.
†Disclaimer: Not affiliated with Object Refinery Limited; just a satisfied customer and very minor contributor.
http://code.google.com/p/uidesign/source/browse/trunk/UIDProject/src/testers/ScatterPlotDemo1.java?spec=svn19&r=19

Are there any well-known algorithms to count steps based on the accelerometer?

I'm implementing an accelerometer-based pedometer, and I was wondering if there are any known algorithms to handle that.
You have probably found this:
Enhancing the Performance of Pedometers Using a Single Accelerometer
Anyhow, I am also interested in finding a good algorithm; I am curious what other answers you will get. :)
There is an app called Sensor data that you can use to gather experimental data so you can then analyze it and try to find an algorithm.
It's going to be quite tricky to find a very good algorithm, especially for the iPhone, since its accelerometer is quite noisy.
There's an interesting paper (with source code) here that may be of help: http://www.analog.com/static/imported-files/application_notes/47076299220991AN_900.pdf.
The charts are interesting. If I were to do this myself, I would probably sample the data at a fairly high frequency, convert to the frequency domain with an FFT, apply a digital band-pass filter to cut off all frequencies outside the expected minimum/maximum walking speeds (including any DC offset), do an inverse FFT to reconstruct the now-filtered signal, and then run the resulting data through an edge detector with a hysteresis function. This is all pure speculation, of course, but looking at those charts I think it would work, it would be relatively fast to code up, and it would be well within the processing power of a mobile phone.
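A rough numpy sketch of that pipeline, just to make the idea concrete. The band limits (about 0.5-3 Hz for walking) and the hysteresis thresholds are assumptions that would need tuning on real data.

    import numpy as np

    def count_steps(accel_magnitude, fs, low_hz=0.5, high_hz=3.0,
                    hi_thresh=0.5, lo_thresh=-0.5):
        """Sketch of the FFT band-pass + hysteresis idea.
        accel_magnitude: 1-D array of |acceleration| samples; fs: sample rate in Hz."""
        x = np.asarray(accel_magnitude, dtype=float)
        x = x - x.mean()                       # remove the DC offset (gravity)

        # Band-pass in the frequency domain: keep only plausible step cadences.
        spectrum = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
        filtered = np.fft.irfft(spectrum, n=len(x))

        # Hysteresis "edge detector": count a step each time the signal rises above
        # hi_thresh after having previously dropped below lo_thresh.
        steps, armed = 0, True
        for v in filtered:
            if armed and v > hi_thresh:
                steps += 1
                armed = False
            elif not armed and v < lo_thresh:
                armed = True
        return steps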

What image preprocessing techniques should I take into consideration before applying OpenCV's Viola-Jones method for face detection?

I am working on a project at school regarding face detection, based on the technique described by Viola and Jones (2001/2004).
I've read that OpenCV has an implementation of this algorithm and that it works very well.
I was wondering if you have any advice regarding what (pre-processing) techniques to apply to the images before testing for the existence of a face (e.g. histogram equalization)?
I basically used the code from this sample program from the OpenCV page, and it worked very well for my master's thesis project. If you get bad results or your lighting is strange, you can try a histogram equalization.
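For what it's worth, a minimal Python sketch of that idea: equalize the grayscale image before running one of the stock Haar cascades. The file name and detection parameters here are placeholders.

    import cv2

    # One of the stock frontal-face Haar cascades shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("photo.jpg")                      # placeholder file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                      # normalize lighting/contrast

    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(30, 30))
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)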
A friend and I did something similar for a university project, and especially on low-resolution video sequences it really helped to upsample the frame, doubling its size. It was my friend's idea; they had previously taken an image processing class. Although theoretically equivalent, things like decreasing the initial scan window size and the horizontal and vertical steps didn't produce the same result. In other words, it may be better to work on larger images with larger scan windows than on smaller images with smaller scan windows. I don't know exactly why.
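A sketch of the upsampling variant (again Python; file names are placeholders, and remember to map the detected rectangles back to the original resolution):

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    big = cv2.resize(frame, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_LINEAR)
    big = cv2.equalizeHist(big)

    detections = cascade.detectMultiScale(big, scaleFactor=1.1, minNeighbors=5)
    # Divide by 2 to map the rectangles back onto the original frame.
    faces = [(x // 2, y // 2, w // 2, h // 2) for (x, y, w, h) in detections]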
Bye ;-)
I know it's too late, but do go through this site as well.
It speaks of the common pre-processing required for the images: equalising the image, editing out irrelevant content, etc.

Duplicate image detection algorithms?

I am thinking about creating a database system for images where they are stored with compact signatures and then matched against a "query image" that could be a resized, cropped, brightened, rotated or a flipped version of the stored one. Note that I am not talking about image similarity algorithms but rather strictly about duplicate detection. This would make things a lot simpler. The system wouldn't care if two images have an elephant on them, it would only be important to detect if the two images are in fact the same image.
Histogram comparisons simply won't work for cropped query images. The only viable way to go that I see is shape/edge detection. Images would first be somehow discretized, every pixel being converted to an 8-level grayscale, for example. The discretized image will contain vast regions of the same colour, which would help indicate shapes. These shapes could then be described with coefficients, and their relative positions could be remembered. Compact signatures would be produced out of that. This process would be carried out on each image being stored and on each query image when a comparison has to be performed. Does that sound like an efficient and realisable algorithm? (A rough sketch of the discretization step follows below.)
I know this is an immature research area; I have read the Wikipedia articles on the subject, and I would ask you to propose your ideas for such an algorithm.
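A rough sketch of the discretization step described in the question (Python/OpenCV; the file name and the choice of 8 levels are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name

    # Quantize 256 grey levels down to 8 bands (0, 32, 64, ... 224), producing the
    # large flat regions the question describes.
    step = 256 // 8
    quantized = (img // step) * step

    # The flat regions of each band can then be extracted as connected components
    # and summarized (area, centroid, moments) to build a compact signature.
    for band in range(0, 256, step):
        mask = (quantized == band).astype(np.uint8)
        count, labels = cv2.connectedComponents(mask)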
SURF should do the job.
http://en.wikipedia.org/wiki/SURF
It is fast and robust, it is invariant to rotation and scaling, and it also tolerates blur and contrast/lighting changes (though not as strongly).
There is an example of automatic panorama stitching.
Check the article on SIFT first:
http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
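A minimal sketch of using local features for the duplicate check, here with SIFT via cv2.SIFT_create (SURF lives in OpenCV's non-free contrib module) plus Lowe's ratio test. The match-count threshold is a guess to be tuned.

    import cv2

    def looks_like_duplicate(path_a, path_b, min_good_matches=40):
        """Rough duplicate check: count SIFT matches that pass the ratio test."""
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return False

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des_a, des_b, k=2)
        good = [pair[0] for pair in matches
                if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
        return len(good) >= min_good_matches

For a stored database, the descriptors themselves (or a quantized bag-of-words version of them) would act as the compact signature.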
If you want to do a feature-detection-driven model, you could perhaps take the singular value decomposition of the images (you'd probably have to do an SVD for each colour channel) and use the first few columns of the U and V matrices along with the corresponding singular values to judge how similar the images are.
Very similar to the SVD method is one called principal component analysis, which I think will be easier to use to compare images. The PCA method is pretty close to just taking the SVD and getting rid of the singular values by factoring them into the U and V matrices. If you follow the PCA path, you might also want to look into correspondence analysis. By the way, the PCA method was a common method used in the Netflix Prize for extracting features.
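A minimal sketch of the singular-value idea, using only the normalized top-k singular values of the grayscale image as the signature (a simplification of the above, which also keeps U/V columns; k and any distance threshold are assumptions):

    import cv2
    import numpy as np

    def svd_signature(path, k=10):
        """Compact signature: the normalized top-k singular values of the image."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(float)
        s = np.linalg.svd(img, compute_uv=False)   # singular values, descending
        s = s[:k]
        return s / np.linalg.norm(s)

    def svd_distance(path_a, path_b, k=10):
        a, b = svd_signature(path_a, k), svd_signature(path_b, k)
        return float(np.linalg.norm(a - b))        # small distance -> likely duplicates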
How about converting this Python code back to C?
Check out tineye.com. They have a good system that's always improving. I'm sure you can find research papers from them on the subject.
The Wikipedia article you might be referring to is the one on feature detection.
If you are running on an Intel/AMD processor, you could use the Intel Integrated Performance Primitives to get access to a library of image processing functions. Beyond that, there is the OpenCV project, again another library of image processing functions. The advantage of using a library is that you can try various algorithms, already implemented, to see what will work for your situation.
