I'd like to create some specific surfaces, such as surfaces of revolution, ruled surfaces, and canal surfaces, and export them as OBJ files. I'm trying to do this in Maya. How can I do this?
I'm creating a simple graphics editor in C with GLUT, and I'm wondering how I can save my drawings as PNG, BMP, etc. I tried libpng (png.h), but it didn't work for me: I got no errors, but nothing was saved.
Any advice?
You can easily save BMP images from your app using SOIL (the Simple OpenGL Image Library).
http://lonesock.net/soil.html
For example, I used:
int save_result = SOIL_save_screenshot(
    filename,            /* output path for the BMP file */
    SOIL_SAVE_TYPE_BMP,  /* output format */
    0, 0,                /* lower-left corner of the capture region */
    width, height        /* size of the capture region in pixels */
);
If you are a Debian user, there is a package named libsoil-dev. It should be available in Ubuntu too.
OpenGL glutWireCube works but glutWireCylinder doesn't.
glutWireCylinder throws an 'undefined' error. How can this be?
What am I doing wrong?
That function is not defined because GLUT simply does not have a function named glutWireCylinder(). See the official GLUT documentation:
https://www.opengl.org/documentation/specs/glut/spec3/spec3.html
If you don't mind using legacy features, the GLU (OpenGL Utility) library has gluCylinder() for drawing cylinders (see the man page for details):
// setup
GLUquadric* quadric = gluNewQuadric();
gluQuadricDrawStyle(quadric, GLU_LINE);   // wireframe drawing

// drawing: base radius, top radius, height, slices, stacks
gluCylinder(quadric, radius, radius, height, 32, 8);

// cleanup
gluDeleteQuadric(quadric);
If you don't want to use deprecated libraries, it's easy to write the cylinder-drawing code yourself. My answer here shows how to draw a circle, which gets you most of the way to drawing a cylinder: How to draw a circle using VBO in ES2.0.
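To illustrate the geometry independently of the OpenGL calls, here is a minimal Python/NumPy sketch (my own illustration, not from the linked answer) that generates the two rings of cylinder vertices you would upload to a VBO; the slices parameter mirrors the gluCylinder argument above.

import numpy as np

def cylinder_rings(radius, height, slices=32):
    # Two rings of `slices` points: one at z = 0 and one at z = height.
    angles = np.linspace(0.0, 2.0 * np.pi, slices, endpoint=False)
    x = radius * np.cos(angles)
    y = radius * np.sin(angles)
    bottom = np.column_stack([x, y, np.zeros(slices)])
    top = np.column_stack([x, y, np.full(slices, float(height))])
    return bottom, top

# Draw each ring as a line loop and connect bottom[i] to top[i] for the side edges.
bottom, top = cylinder_rings(1.0, 2.0)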
I’m learning SceneKit by writing a game where you’re flying through an asteroid field dodging objects. Initially, I did this by moving/rotating the camera, but I realized that at some point I’d run out of coordinate space and it’s probably better to move all of the objects toward the camera (and dispose of them when I’ve “passed” them).
But I can’t seem to get them to move. My original code that moved the camera looked like this:
[cameraNode setTransform:CATransform3DTranslate(cameraNode.transform, 0.f, 0.f, -2.f)];
I thought I could do something similar with each asteroid node:
[asteroidNode setTransform:CATransform3DTranslate(cameraNode.transform, 0.f, 0.f, 2.f)];
but they don’t move. If I add a basic animation:
CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"position.z"];
anim.byValue = @10;
anim.duration = 1.0;
[asteroidNode addAnimation:anim forKey:@"move forward"];
the asteroids move but predictably snap back to their original location when it’s done.
This feels like a rookie mistake but I can’t find anything addressing this problem online. Am I going about this the wrong way?
Thanks,
Jeff
Moving the cameraNode the way you do should work, but make sure cameraNode is your current point of view or it will have no effect (check that scnView.pointOfView == cameraNode).
If you want to move the nodes instead, you should translate node.transform (not cameraNode.transform). But actually it's simpler to just do:
node.position = SCNVector3Make(node.position.x, node.position.y, node.position.z+2.0);
Also make sure there is no animation or physics simulation running on these nodes that could override your changes.
Given an image (e.g. a newspaper, a scanned newspaper, a magazine, etc.), how do I detect the regions containing text? I only need to know the regions and remove them; I don't need to do text recognition.
The purpose is to remove these text areas so that my feature extraction procedure runs faster, since the text areas are meaningless for my application. Does anyone know how to do this?
BTW, it would be great if this could be done in Matlab!
Best!
You can use the Stroke Width Transform (SWT) to highlight text regions.
Using my MEX implementation posted here, you can:
img = imread('http://i.stack.imgur.com/Eyepc.jpg');
[swt, swtcc] = SWT( img, 0, 10 );
Playing with internal parameters of the edge-map extraction and image filtering in SWT.m can help you tweak the resulting mask to your needs.
To get this result, I used these parameters for the edge-map computation in SWT.m:
edgeMap = single( edge( img, 'canny', [0.05 0.25] ) );
Text detection in natural images is an active area of research in the computer vision community; you can refer to the ICDAR papers. In your case, though, it should be simpler: text from newspapers or magazines is roughly fixed in size and horizontally oriented.
So you can apply a scanning window of fixed size, say 32x32, and train it on the ICDAR 2003 training dataset using windows that contain text as positives. With a small feature set of color and gradients, an SVM can classify each window as text or non-text.
For reference, see http://crypto.stanford.edu/~dwu4/ICDAR2011.pdf. For code, you can try the authors' homepages.
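As a rough illustration of that sliding-window idea (not the pipeline from the paper above), here is a minimal Python sketch using scikit-image HOG features and a scikit-learn linear SVM; load_training_windows() is a hypothetical helper standing in for your own ICDAR patch loader.

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN = 32  # scanning-window size in pixels

def window_features(patch):
    # Gradient-based features for one 32x32 grayscale patch.
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Training: patches and 0/1 labels come from your own data (hypothetical helper).
patches, labels = load_training_windows()
clf = LinearSVC().fit([window_features(p) for p in patches], labels)

def text_mask(gray):
    # Classify every non-overlapping 32x32 window of a grayscale image.
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - WIN + 1, WIN):
        for x in range(0, w - WIN + 1, WIN):
            if clf.predict([window_features(gray[y:y + WIN, x:x + WIN])])[0] == 1:
                mask[y:y + WIN, x:x + WIN] = True
    return mask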
This example in the Computer Vision System Toolbox in Matlab shows how to detect text using MSER regions.
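If you end up working outside Matlab, OpenCV exposes an MSER detector as well; here is a minimal Python sketch of the same idea (the file name and default parameters are just placeholders to tune):

import cv2

img = cv2.imread('page.png')                      # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()                          # default parameters; tune for your scans
regions, _ = mser.detectRegions(gray)

# Draw a bounding box around each detected region.
for pts in regions:
    x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)

cv2.imwrite('mser_regions.png', img)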
If your image is well binarized and you know the usual size of the text, you could use the horizontal and vertical run-length smoothing algorithms. They are implemented in the open-source library AForge.NET (HorizontalRunLengthSmoothing and VerticalRunLengthSmoothing), but it should be easy to reimplement them in Matlab.
The intersection of the result images from these two algorithms gives a good indication that a region contains text; it is not perfect, but it is fast.
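For illustration, here is a minimal NumPy sketch of run-length smoothing on a 0/1 binary image (text = 1); the vertical pass is the same operation applied to the transpose, and the max_gap threshold is an assumption you would tune to your text size.

import numpy as np

def horizontal_rls(binary, max_gap):
    # Fill background runs shorter than max_gap between foreground pixels, row by row.
    out = binary.copy()
    for row in out:
        last_fg = -1
        for x, v in enumerate(row):
            if v:
                if last_fg >= 0 and x - last_fg - 1 <= max_gap:
                    row[last_fg + 1:x] = 1   # close the small gap
                last_fg = x
    return out

def text_mask(binary, max_gap=25):
    h = horizontal_rls(binary, max_gap)
    v = horizontal_rls(binary.T, max_gap).T   # vertical smoothing via the transpose
    return h & v                              # intersection marks likely text regions

# usage: mask = text_mask((gray < 128).astype(np.uint8))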
I am now working on an eye-tracking project, in which I am tracking eyes in a webcam video (resolution 640x480).
I can locate and track the eye in every frame, but I need to locate the pupil. I have read a lot of papers, and most of them refer to Alan Yuille's deformable template method to extract and track the eye features. Can anyone help me with code for this method in any language (Matlab/OpenCV)?
I have tried different thresholds, but due to the low resolution of the eye regions it does not work very well. I would really appreciate any kind of help with finding the pupil, or even the iris, in the video.
What you need to do is convert your webcam into a near-infrared (NIR) camera. There are plenty of tutorials online for that. Try this.
An image taken from an NIR cam will look something like this:
You can then use OpenCV:
1. Threshold the image.
2. Apply the erode function.
3. Flood-fill the image with some color, taking a corner as the seed point.
4. Eliminate the holes and invert the image.
5. Apply the distance transform (each foreground pixel gets its distance to the nearest zero pixel).
6. Find the coordinate of the maximum value and draw a circle there.
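A minimal Python/OpenCV sketch of those steps (the file name, threshold value, and kernel size are assumptions to tune for your camera):

import cv2
import numpy as np

gray = cv2.imread('eye_nir.png', cv2.IMREAD_GRAYSCALE)    # placeholder NIR eye image

# Steps 1-2: threshold (pupil is dark) and erode to remove small specks.
_, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
mask = cv2.erode(mask, np.ones((3, 3), np.uint8), iterations=2)

# Steps 3-4: flood-fill from a corner, then combine to fill holes inside the pupil blob.
flood = mask.copy()
ff_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)
cv2.floodFill(flood, ff_mask, (0, 0), 255)
filled = mask | cv2.bitwise_not(flood)

# Steps 5-6: distance transform; its maximum lies deepest inside the pupil.
dist = cv2.distanceTransform(filled, cv2.DIST_L2, 5)
_, max_val, _, center = cv2.minMaxLoc(dist)
cv2.circle(gray, center, int(max_val), 255, 1)             # estimated pupil circle
cv2.imwrite('pupil.png', gray)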
If you're still working on this, check out my OptimEyes project: https://github.com/LukeAllen/optimeyes
It uses Python with OpenCV, and works fairly well with images from a 640x480 webcam. You can check out the "Theory Paper" and demo video on that page also. (It was a class project at Stanford earlier this year; it's not very polished but we made some attempts to comment the code.)
Depending on the application, for tracking the pupil I would find a bounding box for each eye and then find the darkest pixel within that box.
Some pseudocode:
box left_location  = findlefteye()
box right_location = findrighteye()
image_matrix left  = image[left_location]
image_matrix right = image[right_location]
image_matrix average = (left + right) / 2      // average the two eye patches
pixel min_pos = argmin(average)                // position of the darkest pixel
pixel left_pupil  = left_location.corner + min_pos
pixel right_pupil = right_location.corner + min_pos
Building on the first answer (suggested by Anirudth): just apply the HoughCircles function after the thresholding step.
HoughCircles gives you the radius (r) and center (x, y) of each detected circle directly, so you can draw a circle around the pupil and easily read off the center of the eye.
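For example, a minimal Python/OpenCV sketch of that step (the file name and parameter values are only a starting point to tune):

import cv2
import numpy as np

gray = cv2.imread('eye.png', cv2.IMREAD_GRAYSCALE)          # placeholder input
_, thresh = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)

# Detect circular blobs (the pupil) in the thresholded image.
circles = cv2.HoughCircles(thresh, cv2.HOUGH_GRADIENT, dp=2, minDist=50,
                           param1=100, param2=20, minRadius=5, maxRadius=40)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(gray, (int(x), int(y)), int(r), 255, 1)  # circle around the pupil
        cv2.circle(gray, (int(x), int(y)), 2, 255, -1)       # mark the center (x, y)
cv2.imwrite('pupil_hough.png', gray)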