About finding the pupil in a video

I am now working on an eye tracking project. In this project I am tracking eyes in a webcam video (resolution is 640x480).
I can locate and track the eye in every frame, but I need to locate the pupil. I have read a lot of papers, and most of them refer to Alan Yuille's deformable template method for extracting and tracking eye features. Can anyone help me with code for this method in any language (MATLAB/OpenCV)?
I have tried different thresholds, but due to the low resolution of the eye regions it does not work very well. I would really appreciate any kind of help with finding the pupil, or even the iris, in the video.

What you need to do is convert your webcam into a near-infrared (NIR) camera. There are plenty of tutorials online for that.
An image taken from an NIR camera will look something like this:
You can then use OpenCV for the rest:
1) Threshold the image.
2) Use the erode function.
3) Flood fill the image with some color, taking a corner as the seed point.
4) Eliminate the holes and invert the image.
5) Apply the distance transform (distance from each pixel to the nearest zero pixel).
6) Find the coordinate of the maximum value and draw a circle there.
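For what it's worth, here is a minimal OpenCV (Python) sketch of those steps, assuming a dark pupil that ends up as an enclosed dark blob after thresholding; the file name, threshold value and kernel size are placeholders you would need to tune for your own NIR frames.
import cv2
import numpy as np

img = cv2.imread("nir_eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical NIR eye frame

# 1-2. threshold (dark pupil -> black) and erode to clean the mask
_, binary = cv2.threshold(img, 70, 255, cv2.THRESH_BINARY)
binary = cv2.erode(binary, np.ones((3, 3), np.uint8))

# 3-4. flood fill from a corner so dark regions touching the border are removed,
# leaving only enclosed dark blobs (ideally just the pupil), then invert
mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
cv2.floodFill(binary, mask, (0, 0), 255)
pupil_mask = cv2.bitwise_not(binary)

# 5-6. distance to the nearest zero pixel peaks at the pupil center;
# the peak value is roughly the pupil radius
dist = cv2.distanceTransform(pupil_mask, cv2.DIST_L2, 5)
_, radius, _, center = cv2.minMaxLoc(dist)

out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.circle(out, center, int(radius), (0, 255, 0), 1)
cv2.imwrite("pupil.png", out)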

If you're still working on this, check out my OptimEyes project: https://github.com/LukeAllen/optimeyes
It uses Python with OpenCV, and works fairly well with images from a 640x480 webcam. You can check out the "Theory Paper" and demo video on that page also. (It was a class project at Stanford earlier this year; it's not very polished but we made some attempts to comment the code.)

Depending on the application, for tracking the pupil I would find a bounding box for each eye and then find the darkest pixel within that box.
Some pseudocode:
box left_location = findlefteye()
box right_location = findrighteye()
image_matrix left = image[left_location]
image_matrix right = image[right_location]
image_matrix average = left + right
pixel_offset darkest = argmin(average)   // position of the darkest pixel, not its value
pixel left_pupil = left_location.corner + darkest
pixel right_pupil = right_location.corner + darkest
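If you skip the averaging and just take the darkest point in each eye box separately, a minimal OpenCV (Python) sketch might look like this; the frame file name and the two bounding boxes are hypothetical placeholders you would get from your eye tracker.
import cv2

def darkest_point(gray, box):
    """Return image coordinates of the darkest pixel inside box = (x, y, w, h)."""
    x, y, w, h = box
    roi = cv2.GaussianBlur(gray[y:y + h, x:x + w], (5, 5), 0)  # smooth so a single noisy pixel can't win
    _, _, min_loc, _ = cv2.minMaxLoc(roi)
    return (x + min_loc[0], y + min_loc[1])

gray = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)  # hypothetical webcam frame
left_pupil = darkest_point(gray, (200, 180, 60, 40))    # placeholder left-eye box
right_pupil = darkest_point(gray, (360, 180, 60, 40))   # placeholder right-eye box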

In the first answer suggested by Anirudth...
Just apply the HoughCircles function after the thresholding step in that answer.
Then you can draw circles around the pupil directly, and from the radius (r) and center (x, y) of the detected circle you can easily find the center of the eye.
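A rough OpenCV (Python) sketch of that step, not tied to any particular answer's code; the threshold and all the HoughCircles parameters are guesses that need tuning for a 640x480 webcam, and the input is assumed to be an already-cropped eye region.
import cv2

gray = cv2.cvtColor(cv2.imread("eye_roi.png"), cv2.COLOR_BGR2GRAY)   # hypothetical eye crop
gray = cv2.medianBlur(gray, 5)
_, binary = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)      # dark pupil -> white blob

circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=100, param2=10, minRadius=3, maxRadius=30)
if circles is not None:
    x, y, r = (int(round(v)) for v in circles[0, 0])
    out = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.circle(out, (x, y), r, (0, 255, 0), 1)   # pupil outline
    cv2.circle(out, (x, y), 2, (0, 0, 255), -1)  # center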

Related

How to detect text region in image?

Given an image (e.g. a newspaper, a scanned newspaper, a magazine page), how do I detect the regions containing text? I only need to find the regions and remove them; I don't need to do text recognition.
The purpose is to remove these text areas so that my feature extraction procedure runs faster, as these text areas are meaningless for my application. Does anyone know how to do this?
By the way, it would be good if this could be done in MATLAB!
Best!
You can use Stroke Width Transform (SWT) to highlight text regions.
Using my mex implementation posted here, you can
img = imread('http://i.stack.imgur.com/Eyepc.jpg');
[swt swtcc] = SWT( img, 0, 10 );
Playing with internal parameters of the edge-map extraction and image filtering in SWT.m can help you tweak the resulting mask to your needs.
To get this result, I used these parameters for the edge-map computation in SWT.m:
edgeMap = single( edge( img, 'canny', [0.05 0.25] ) );
Text detection in natural images is an active area of research in the computer vision community; you can refer to the ICDAR papers. In your case, though, I think it should be simple enough: since the text comes from newspapers or magazines, it should be of a fixed size and horizontally oriented.
So, you can apply a scanning window of a fixed size, say 32x32. Train it on the ICDAR 2003 training dataset, with positive windows being those that contain text. You can use a small feature set of color and gradients and train an SVM which gives a positive or negative result for whether a window contains text.
For reference, see http://crypto.stanford.edu/~dwu4/ICDAR2011.pdf. For code, you can try the authors' homepages.
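This is not the authors' code, just a minimal sketch of the sliding-window idea in Python, using HOG (gradients only) instead of the color+gradient features mentioned above; train_patches and train_labels stand in for 32x32 patches and labels you would crop from the ICDAR 2003 training set.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def window_features(patch):
    # HOG features of a 32x32 grayscale patch
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# train_patches / train_labels are hypothetical: 32x32 crops labelled 1 (text) or 0 (background)
X = np.array([window_features(p) for p in train_patches])
y = np.array(train_labels)
clf = LinearSVC().fit(X, y)

def text_windows(gray_page, step=16):
    """Slide a 32x32 window over the page and keep windows classified as text."""
    hits = []
    h, w = gray_page.shape
    for y0 in range(0, h - 32 + 1, step):
        for x0 in range(0, w - 32 + 1, step):
            patch = gray_page[y0:y0 + 32, x0:x0 + 32]
            if clf.predict([window_features(patch)])[0] == 1:
                hits.append((x0, y0, 32, 32))
    return hits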
This example in the Computer Vision System Toolbox in Matlab shows how to detect text using MSER regions.
If your image is well binarized and you know the usual size of the text, you could use the HorizontalRunLengthSmoothing and VerticalRunLengthSmoothing algorithms. They are implemented in the open-source library AForge.NET, but it should be easy to reimplement them in MATLAB.
The intersection of the result images from these algorithms will give you a good indication of which regions contain text; it is not perfect, but it is fast.
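For reference, here is a rough pure-NumPy sketch of the run-length smoothing idea (not the AForge.NET implementation); binary is assumed to be a 0/1 ink mask, and the default thresholds are rough guesses tied to the expected character spacing and line height.
import numpy as np

def horizontal_rls(binary, threshold):
    """Fill horizontal background runs shorter than `threshold` pixels."""
    out = binary.copy()
    for row in out:
        last_ink_end = None
        for x, v in enumerate(row):
            if v == 1:
                if last_ink_end is not None and x - last_ink_end <= threshold:
                    row[last_ink_end:x] = 1      # bridge the short gap
                last_ink_end = x + 1
    return out

def vertical_rls(binary, threshold):
    return horizontal_rls(binary.T, threshold).T

def text_mask(binary, h_threshold=30, v_threshold=15):
    # The intersection of the two smoothed images marks likely text regions.
    return horizontal_rls(binary, h_threshold) & vertical_rls(binary, v_threshold)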

PostGIS's st_overlaps method is only returning results overlapping the LinearRing which makes up the exterior of the polygon I'm searching under

I'm using PostGIS on ruby/rails, and have created a simple box-like polygon under which I wish to search for land parcels in a county. The st_overlaps tool has worked for this before and it has worked this time, sort of.
So I created the polygon to search for parcels (multi-polygons, as it turns out) underneath it:
factory = RGeo::Cartesian.factory
coords = [[1554780, 1101102], [1561921, 1062647], [1634713, 1097531], [1630867, 1140657]]
points = coords.map { |pair| RGeo::WKRep::WKTParser.new.parse("POINT (#{pair.first} #{pair.last})") }
ring = factory.linear_ring(points)
polygon = factory.polygon(ring)
After running the active record call:
Parcel.where{st_overlaps(:parcel_multipolygon, polygon)}
I get 157 results, far fewer than expected. I exported them to a KML file using a custom script of mine (linked below).
What you'll see in that KML, once loaded in Google Earth, is a parallelogram of pins marking parcels whose polygons clearly straddle the outer ring of the search polygon I created. There are so many parcels along these invisible lines, in such a clear and distinct shape, with no pins at all in the middle, that the search results clearly consist only of parcels whose multipolygons overlap the exterior edge (LinearRing) of the search polygon.
Based on my re-reading of the documentation for st_overlaps, I'm left puzzled as to what seems to be the problem here.
Here's a link to view the KMZ export (coordinates converted to geographic before export). You can view it in your browser. The search polygon itself is not included, but it's easy to see where its exterior ring is:
https://docs.google.com/file/d/0B5inC0VAuhH1TXdTbWQ2RngxZk0/edit?usp=sharing
I think it is behaving as expected. ST_Overlaps only returns geometries that intersect the search polygon without either geometry being completely contained in the other, which is exactly the case for parcels straddling its boundary. If you want all features inside the polygon as well, try ST_Intersects.

iOS 6 MapKit annotation rotation

Our app has a rotating map view which aligns with the compass heading. We counter-rotate the annotations so that their callouts remain horizontal for reading. This works fine on iOS 5 devices but is broken on iOS 6 (the problem is seen both with the same binary as used on the iOS 5 device and with a binary built with the iOS 6 SDK). The annotations initially rotate to the correct horizontal position and then, a short time later, revert to the uncorrected rotation. We cannot see any events that are causing this. This is the code snippet we are using in - (MKAnnotationView *)mapView:(MKMapView *)theMapView viewForAnnotation:(id<MKAnnotation>)annotation:
CATransform3D transformZ = CATransform3DIdentity;
transformZ = CATransform3DRotate(transformZ, _rotationZ, 0, 0, 1);
annotation.myView.layer.transform = transformZ;
Anyone else seen this and anyone got any suggestions on how to fix it on iOS6?
I had an identical problem, so my workaround may work for you. I've also submitted a bug to Apple on it. For me, every time the map got panned by the user, the annotations would get "unrotated".
In my code I set the rotations using CGAffineTransformMakeRotation, and I don't set it in viewForAnnotation but whenever the user's location gets updated. So that is a bit different from you.
My workaround was to add an additional minor rotation at the bottom of my viewForAnnotation method.
if (is6orMore) {
    [annView setTransform:CGAffineTransformMakeRotation(.001)]; // iOS 6 BUG WORKAROUND!
}
So for you, I'm not sure if that works, since you are rotating differently and doing it in viewForAnnotation. But give it a try.
Took me forever to find and I just happened across this fix.

Image-processing basics

I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have a rough algorithm in my mind for how to approach the problem, but I don't even know which software/library to use.
Regards
Adi
EDIT1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough transform and then the houghpeaks function with the maximum number of peaks set to 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT2 - I found a research article on how to calculate the length using the Hough transform, but I'm not able to implement it in MATLAB. Can someone please help?
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is using a Hough Transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. After you have them, you can estimate their length any way you want, based on whatever other constraints you have.
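If you go the OpenCV route, a minimal sketch with the probabilistic Hough transform might look like the following; the file name and all the Canny/Hough parameters are placeholders to tune, and each detected segment's length is simply the Euclidean distance between its endpoints.
import cv2
import numpy as np

img = cv2.imread("fibers.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
edges = cv2.Canny(img, 50, 150)

segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                           minLineLength=30, maxLineGap=10)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        length = np.hypot(x2 - x1, y2 - y1)             # segment length in pixels
        print((x1, y1), (x2, y2), length)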
The problem is two-fold as I see it:
1) locate the start and end points from your starting position;
2) determine the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a 0..1 value on each pixel representing its "whiteness".
In order to find the end points I would write some kind of walker/AI that tries to walk in different directions, knowing the original position and the last traversed direction, then continuing along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you have the start and end points you can feed them into an A* pathfinding algorithm, giving black pixels a low cost and white pixels a very high cost. Then find the shortest path between the start and end points; its length is the length of the fiber.
It's hard to give more detail since I have no idea what techniques you're going to use, and I have no example input data.
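To make the second half concrete, here is a hedged sketch of that shortest-path idea using plain Dijkstra (A* with a zero heuristic); gray, start and goal are assumed inputs, and the cost weighting is an arbitrary choice.
import heapq
import numpy as np

def fiber_path_length(gray, start, goal):
    """Approximate fiber length as the cheapest 4-connected path from start to goal.

    gray: float array in [0, 1] (0 = black, inside the fiber; 1 = white edge).
    start, goal: (row, col) end points found by the walker described above.
    """
    h, w = gray.shape
    step_cost = 1.0 + 100.0 * gray          # black pixels cheap, white pixels expensive
    best = np.full((h, w), np.inf)
    best[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d                        # roughly the number of black pixels traversed
        if d > best[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + step_cost[nr, nc]
                if nd < best[nr, nc]:
                    best[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None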
Assumptions:
- the image can be considered binary, containing only 0s (black) and 1s (white);
- all the fibers are straight and their start and end points lie on the image borders;
- we can come up with a limit for the thickness of a fiber (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start wherever you want, in whichever direction you want, just be consistent) until you encounter the first white pixel. At this point your program knows it has definitely found a starting point. From there, keep gathering white pixels until you reach a certain limit (or threshold). The idea is that, if there is a fiber, you will get the angle between the fiber and the border the starting point is on; of course, the more pixels you gather (the further in you go), the surer you will be in the end.
This is the trickiest part. After somehow ending up with a line, you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image, and the angle (or its cos/sin), you will have the coordinate of the end point. Be advised: "exact" is not quite the right word here, because we may (in fact, we will) have calculation errors in the cos/sin values, so you need to hold the threshold as long as possible. Your end point will therefore not actually be a point, but rather an area indicating that the end point is probably somewhere inside it. The rest is just simple maths.
Obviously you can put more detail into this method, like checking both white lines that make up a fiber and deciding which one is longer, or allowing some margin for error since those lines will not be perfectly straight; this is where a conceptual thickness comes into play, and so on.
Programming:
C# has nice stuff and it is easy for you to use. I'll put some code here:
newBitmap = new Bitmap(openFileDialog1.FileName);
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value...
        // things go here...
    }
}
You'll get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loops this code scans the image left to right (column by column); you can change this, however.
Since you know C++ and C, I would recommend OpenCV. It is open source, so if you don't trust anyone (like me), you won't have a problem ;). Also, if you want to use C# as @VictorS mentioned, I would use Emgu CV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and tutorials for Emgu CV can be found on its website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas.
Define your anatomical tree with two different labels: the black one, which is the region you are after, and the white edges.
Then choose the whole 2D image as your ROI and click Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.

Strange OpenCV Distance Transform Results

I'm trying to run a distance transform on a thresholded binary image in order to assist anomaly detection (my hope is that I can detect large changes around the edges of the object). However, for some reason, upon running my distance transform script I'm getting a strange banding type of effect. I tested something similar with the distance transform demo script in the samples directory, with the same results. One possible reason I came up with was that the distance was going beyond the 0-255 scale and therefore essentially being taken modulo 256 to keep it within the boundaries. Has anyone had any experience with this who could advise?
I have posted images and code on my blog if that helps
Thanks in advance,
Ian
One quick way to test your theory: try it with a grey-scale image that's muted (all values v -> 128 + (v - 128)/32, or something like that) and see if that makes the bands much wider or eliminates them completely.
It's always a good idea to nail down what the problem is first, and then try to fix it.
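Along the same lines of nailing down the problem first, here is a small OpenCV (Python) check of my own, aimed directly at the 0-255 wrap-around theory rather than at muting the grey levels; the file name and threshold are placeholders.
import cv2
import numpy as np

img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)   # float32, no 8-bit limit
print("max distance:", dist.max())                     # > 255 means a plain 8-bit cast misbehaves

wrapped = dist.astype(np.uint8)                                                # naive cast: bands appear if max > 255
scaled = cv2.normalize(dist, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)   # scaled for display instead
cv2.imwrite("dist_wrapped.png", wrapped)
cv2.imwrite("dist_scaled.png", scaled)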
I can't help with the code, but I'd like to point out that the expected result on your blog is probably incorrect as well. Look at the sharp black-gray border in the bottom part of the large object: it should not be there, as the maximum difference between two adjacent pixels in a distance transform should be 1.
