How to make a turret point at an object using 2D frames in tower defense games? - openfl

I'm working on a tower defense game and I'm using Stencyl.
I want to make a 2D tower defense game like Clash of Clans, so I want to know how to make a turret point at an object using animation frames, like the cannon in Clash of Clans.
I mean: when an object enters the range of the tower, the tower should point at it, not by rotating the sprite but by switching between pre-drawn 2D frames, using code or some mathematical approach.

I've found the solution.
Do this:

float Direction = 0;
float DirectionDegree = 0;
int FinalDirection = 0;
int NumberOfDirections = 32; // e.g. 8, 16, 24, 32 or even 128 directions; must not be 0
DirectionDegree = 360.0f / NumberOfDirections; // angle slice covered by one frame

void update() // this will run every frame
{
    // angle from the tower to the target, converted from radians to degrees
    Direction = Math.atan2(target.y - tower.y, target.x - tower.x) * (180 / Math.PI);
    if (Direction < 0)
    {
        Direction += 360; // map atan2's (-180, 180] output into [0, 360)
    }
    // snap to the nearest frame and wrap, so 360 degrees maps back to frame 0
    FinalDirection = Math.round(Direction / DirectionDegree) % NumberOfDirections;
    tower.frame = FinalDirection;
}

Related

Intersection problem with Möller-Trumbore algorithm on one dimension of the triangle

I am currently working on a raytracer project and I just found an issue with the triangle intersections.
Sometimes, and I don't understand when or why, some of the pixels of the triangle don't appear on the screen; instead I can see the object right behind it. It only occurs on one dimension of the triangle, and it depends on the camera and triangle positions (e.g. picture below).
(Image: triangle with pixels missing)
I am using the Möller-Trumbore algorithm to compute every intersection. Here's my implementation:
t_solve s;
t_vec   v1;
t_vec   v2;
t_vec   tvec;
t_vec   pvec;

v1 = vec_sub(triangle->point2, triangle->point1);
v2 = vec_sub(triangle->point3, triangle->point1);
pvec = vec_cross(dir, v2);
s.delta = vec_dot(v1, pvec);
if (fabs(s.delta) < 0.00001)
    return ;
s.c = 1.0 / s.delta;
tvec = vec_sub(ori, triangle->point1);
s.a = vec_dot(tvec, pvec) * s.c;
if (s.a < 0 || s.a > 1)
    return ;
tvec = vec_cross(tvec, v1);
s.b = vec_dot(dir, tvec) * s.c;
if (s.b < 0 || s.a + s.b > 1)
    return ;
s.t1 = vec_dot(v2, tvec) * s.c;
if (s.t1 < 0)
    return ;
if (s.t1 < rt->t)
{
    rt->t = s.t1;
    rt->last_obj = triangle;
    rt->flag = 0;
}
The only clue at the moment is that by using a different method of calculating my ray (called dir in the code), I get fewer missing pixels.
Moreover, when I turn the camera and look from behind, the bug occurs on the opposite side of the triangle. All of this makes me think that the issue is mainly linked to the ray.
Take a look at Watertight Ray/Triangle Intersection. I would much appreciate it if you could provide a minimal example where a ray should hit the triangle but misses it. I had this a long time ago with the Cornell Box - inside the box there were some "black" pixels because on edges none of the triangles had been hit. It's a common problem stemming from floating-point imprecision.
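As a quick experiment (not a substitute for the watertight method), you can also loosen the barycentric rejection tests by a small tolerance, so that a ray grazing a shared edge is accepted by at least one of the two triangles sharing it. A minimal sketch against the code above; EDGE_EPS is an arbitrary value to tune:

#define EDGE_EPS 1e-7

/* replaces the two barycentric tests above: accept hits slightly outside
 * [0, 1] so that edge-grazing rays are not rejected by both triangles */
if (s.a < -EDGE_EPS || s.a > 1.0 + EDGE_EPS)
    return ;
/* ... s.b computed as before ... */
if (s.b < -EDGE_EPS || s.a + s.b > 1.0 + EDGE_EPS)
    return ;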

Implementing object detection like OpenCV

I'm trying to implement the Viola-Jones algorithm for object detection using Haar cascades (like OpenCV's implementation) in C, to detect faces. I'm writing the C code in a Vivado HLS compatible way, so I can port the implementation to an FPGA. My main goal is to learn as much as possible, rather than just getting it to work. I would also appreciate any help with improving my question.
I basically started reading G. Bradski's Learning OpenCV, watched some online tutorials, and got started writing the code. Sure enough, it's not detecting faces and I don't know why. At this point I care more about understanding my mistakes than about being able to detect faces.
My Implementation Steps
I'm not sure how much detail is appropriate, but to keep it short:
Extracting the Haar cascade data from haarcascade_frontalface_default.xml into C-readable structures (huge arrays)
Writing a function to create an integral image of any given 8-bit greyscale image of size 24x24 (the same size as listed in the cascade); a sketch follows this list
Applying knowledge from this great post to make the necessary calculations
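For reference, here is a minimal sketch of the integral-image step, assuming an inclusive integral image (entry (r, c) holds the sum of all pixels with row <= r and column <= c) over the same row-major 1-D layout used in the code below:

#define WIN 24 /* window width = height, as in the cascade */

/* Build an inclusive integral image: ii[r * WIN + c] holds the sum of
 * img[y * WIN + x] over all y <= r and x <= c. */
void integralImage(const unsigned char img[WIN * WIN], int ii[WIN * WIN])
{
    for (int r = 0; r < WIN; r++) {
        int rowSum = 0; /* running sum of the current row */
        for (int c = 0; c < WIN; c++) {
            rowSum += img[r * WIN + c];
            ii[r * WIN + c] = rowSum + (r > 0 ? ii[(r - 1) * WIN + c] : 0);
        }
    }
}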
My Testing Scheme
A Python script detects faces using the OpenCV library with the same Haar cascade as above to create golden data; each detected face is cut out of the image (ensuring 24x24 size) and stored.
Stored images are converted to one-dimensional C arrays containing pixel values row-wise: img = {row0col0, row0col1, row1col0, row1col1, ... }
The integral image is calculated and face detection applied.
Result
Faces pass only 6 of the 25 stages of the Haar cascade and are therefore not detected by my implementation, even though I know they should be detected, since the Python script with OpenCV and the same Haar cascade did detect them.
My Code
/*
 * This is detectFace.c
 */
#include <stdio.h>
#include "detectFace.h"

// define constants based on the Haar cascade in use
// Each feature is made of at most 3 rects
//#define FEAT_NO 1        // max no. of features (= 2912 for face_default.xml)
#define RECTS_IN_FEAT 3    // max no. of rects per feature
//#define INTS_IN_RECT 5   // no. of ints needed to describe a rect
// each node has one feature (bijective relation) and three doubles
#define STAGE_NO 25        // no. of stages
#define NODE_NO 211        // no. of nodes per stage, corresponds to FEAT_NO since each node always has one feature in haarcascade_frontalface_default.xml
//#define ELMNT_IN_NODE 3  // no. of doubles needed to describe a node
// constants for frame size
#define WIN_WIDTH 24       // width = height = 24

//int detectFace(int features[FEAT_NO][RECTS_IN_FEAT][INTS_IN_RECT], double stages[STAGE_NO][NODE_NO][ELMNT_IN_NODE], double stageThresh[STAGE_NO], int ii[24][24]){
int detectFace(
    int ii[576],
    int stageNum,
    int stageOrga[25],
    float stageThresholds[25],
    float nodes[8739],
    int featOrga[2913],
    int rectangles[6383][5])
{
    int passedStages = 0;  // number of stages passed in this run
    int faceDetected = 0;  // set to 1 if a face is detected, 0 if not

    // Debug:
    int nodesUsed = 0;  // number of floats out of nodes[] processed, used to skip to the unprocessed floats
    int rectsUsed = 0;  // number of rects processed
    int droppedInStage0 = 0;

    // loop through all stages
    int i;
detectFace_label1:
    for (i = 0; i < STAGE_NO; i++)
    {
        double tmp = 0.0;            // accumulates node values, to be compared to the stage threshold
        int nodeNum = stageOrga[i];  // get number of nodes for this stage from stageOrga using stage index i

        // loop through nodes inside each stage
        // NOTE: it is assumed that each node maps to one corresponding feature. Ex: node[0] has feat[0] and node[1] has feat[1],
        // because this is how it is written in haarcascade_frontalface_default.xml
        int j;
detectFace_label0:
        for (j = 0; j < NODE_NO; j++)
        {
            // a node is defined by 3 values:
            double nodeThresh = nodes[nodesUsed];  // the first value is the node threshold
            double lValue = nodes[nodesUsed + 1];  // the second value is the left value
            double rValue = nodes[nodesUsed + 2];  // the third value is the right value
            int sum = 0;                           // contains the weighted value of the rectangles in one Haar feature

            // loop through the rects in a feature; some have 2 and some have 3 rects.
            // Each node always refers to one feature in a way that node0 maps to feature0 and node1 to feature1 (the XML file is built like that)
            //int rectNum = featOrga[j]; // get number of rects for current feature using current node index j
            int k;
detectFace_label2:
            for (k = 0; k < RECTS_IN_FEAT; k++)
            {
                int x = 0, y = 0, width = 0, height = 0, weight = 0, coordUpL = 0, coordUpR = 0, coordDownL = 0, coordDownR = 0;

                // a rect is defined by 5 values:
                x = rectangles[rectsUsed][0];       // the first value is the x coordinate of the top left corner pixel
                y = rectangles[rectsUsed][1];       // the second value is the y coordinate of the top left corner pixel
                width = rectangles[rectsUsed][2];   // the third value is the width of the current rectangle
                height = rectangles[rectsUsed][3];  // the fourth value is the height of this rectangle
                weight = rectangles[rectsUsed][4];  // the fifth value is the weight of this rectangle

                // calculating the 1-D index for the points of interest. Formula: index = width * row + column, assuming values are stored in row order
                coordUpL = ((WIN_WIDTH * y) - WIN_WIDTH) + (x - 1);
                coordUpR = coordUpL + width;
                coordDownL = coordUpL + (height * WIN_WIDTH);
                coordDownR = coordDownL + width;

                // calculate the area sum according to Viola-Jones
                //sum += (ii[x][y] + ii[x+width][y+height] - ii[x][y+height] - ii[x+width][y]) * weight;
                sum += (ii[coordUpL] + ii[coordDownR] - ii[coordUpR] - ii[coordDownL]) * weight;

                // Debug: counting the number of actual rectangles used
                rectsUsed++;
            }

            // decide whether the result of the feature calculation reaches the node threshold
            if (sum < nodeThresh)
            {
                tmp += lValue;  // add left value to tmp if node threshold was not reached
            }
            else
            {
                tmp += rValue;  // add right value to tmp if node threshold was reached
            }
            nodesUsed = nodesUsed + 3;  // one node is processed, increase nodesUsed by the number of floats needed to represent a node (3)
        }

        //######## at this point we went through each node in the current stage #######
        // check if the threshold of the current stage was reached
        if (tmp < stageThresholds[i])
        {
            faceDetected = 0;  // if any stage threshold is not reached, the operation is done and no face is present
            // Debug: show in which stage the frame was dropped
            printf("Face detection failed in stage %d \n", i);
            //i = stageNum; // breaks out of this loop, because i is supposed to stay smaller than STAGE_NO
        }
        else
        {
            passedStages++;  // stage threshold is reached, therefore passedStages counts up
        }
    }

    //######## at this point we went through all stages ###############################
    //----------------------------------------------------------------------------------
    // if the number of passed stages reaches the total number of stages, a face is detected
    if (passedStages == stageNum)
    {
        faceDetected = 1;  // one symbolizes that the input is a face
    }
    else
    {
        faceDetected = 0;  // zero symbolizes that the input is not a face
    }
    return faceDetected;
}
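One thing worth double-checking in the code above, offered as an observation rather than a certain diagnosis: with the indexing formula used there, coordUpL = WIN_WIDTH * (y - 1) + (x - 1), which reads outside the array whenever a rectangle has x == 0 or y == 0. A minimal guarded version of the area sum, assuming the inclusive integral-image layout from the sketch earlier:

/* Sum over the rectangle [x, x + w) x [y, y + h) of an inclusive integral
 * image ii (row-major, stride WIN_WIDTH). The corners at x - 1 or y - 1
 * fall outside the image when x == 0 or y == 0, so they contribute 0. */
int rectSum(const int ii[], int x, int y, int w, int h)
{
    int x2 = x + w - 1;
    int y2 = y + h - 1;
    int upLeft = (x > 0 && y > 0) ? ii[(y - 1) * WIN_WIDTH + (x - 1)] : 0;
    int upRight = (y > 0) ? ii[(y - 1) * WIN_WIDTH + x2] : 0;
    int downLeft = (x > 0) ? ii[y2 * WIN_WIDTH + (x - 1)] : 0;
    int downRight = ii[y2 * WIN_WIDTH + x2];
    return downRight - upRight - downLeft + upLeft;
}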

Moving sprites in a sine wave

Can someone give me an idea or a math formula for making my enemies move in a sine wave?
I tried something like this, but they all move at the same time, so they just form a straight line of enemies moving left and right.
for (int i = 0; i < 5; i++) {
    float y = sinf(100 + delta_time * 0.06f) * 75;
    float x = game->enemy[i].base_x + y;
    game->enemy[i].x = x;
    game->enemy[i].y += 1;
    SDL_Rect rect = { game->enemy[i].x, game->enemy[i].y, game->enemy[i].w, game->enemy[i].h };
    SDL_RenderCopy(game->renderer, game->enemy[i].sprite, NULL, &rect);
}
Let v = (v_x, v_y) be the overall direction of the enemy. Let o be the vector o = (-v_y/||v||, v_x/||v||), where ||v|| = sqrt(v_x.v_x + v_y.v_y) is the norm of v. The vector o is perpendicular to v. A sinusoidal motion is wanted in that direction. Consequently, the position p(t) = (x(t), y(t)) is defined as:
x(t) = v_x.t - A.(v_y/||v||).sin(w.t)
y(t) = v_y.t + A.(v_x/||v||).sin(w.t)
where A is the magnitude of the oscillations and w the pulsation of the oscillations. The corresponding frequency is f = w/(2.pi). Then the wavelength lambda = ||v||/f corresponds to the length of the oscillations.
If the enemy is moving in the x direction (v_y = 0), then:
x(t) = v_x.t
y(t) = A.sin(w.t)
The length of the oscillations is lambda = 2.pi.v_x/w.
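A minimal C sketch of the formulas above; the phase parameter is an addition (not part of the math above) that staggers several enemies so they don't oscillate in lockstep:

#include <math.h>

/* Position at time t along the direction v = (vx, vy), with a sinusoidal
 * offset of amplitude A and pulsation w applied perpendicular to v. */
void sinePath(float t, float vx, float vy, float A, float w, float phase,
              float *x, float *y)
{
    float norm = sqrtf(vx * vx + vy * vy); /* ||v|| */
    float s = A * sinf(w * t + phase);     /* perpendicular offset */
    *x = vx * t - (vy / norm) * s;
    *y = vy * t + (vx / norm) * s;
}

In the asker's loop, passing a per-enemy phase such as i * 0.5f (any distinct constant per enemy works) is what breaks the straight-line lockstep.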

OpenGL: make an "AI" paddle move up and down

I'm a bit of an OpenGL/programming noobie, so I'm trying to make an "AI" for the right paddle. I know this isn't the proper way of doing it; what I SHOULD be doing is making it follow the ball. But right now I'm just trying to make it perpetually move up and down. I can't figure out how to do it. I'm trying to use if statements like:

if (paddle.pos[1] > 1) {
    paddle.pos[1] = paddle.pos[1] - delta;
}

I set delta to something like 0.01, and 1 is the top of the screen. Obviously this isn't right, because as soon as it goes down below 1 it goes up again, but I'm trying to do something like it.
2nd question - how do you move the ball from (0,0) when it starts? Kind of the same problem; I'm using if statements with the x values, but that's definitely not right.
This is using C, by the way.
Try something like this to make pos repeatedly go from 0 to 1 and back to 0:
// Initialize.
float pos = 0.0f;
float delta = 0.01f;

// On every update.
pos += delta;
if (pos > 1.0f) {
    pos = 1.0f;
    delta = -delta;
} else if (pos < 0.0f) {
    pos = 0.0f;
    delta = -delta;
}
The key here is that you invert the sign of your increment each time you reach one of the end positions.
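The same sign-flip idea answers the second question about the ball: give it a velocity component per axis and invert a component whenever the ball crosses the corresponding edge. A minimal sketch, assuming a playfield spanning -1 to 1 in both axes (the velocity values are arbitrary):

// Initialize.
float ball_x = 0.0f, ball_y = 0.0f;
float ball_dx = 0.010f, ball_dy = 0.007f;

// On every update.
ball_x += ball_dx;
ball_y += ball_dy;
if (ball_x > 1.0f || ball_x < -1.0f)
    ball_dx = -ball_dx; // bounce off the left/right edges (or score a point)
if (ball_y > 1.0f || ball_y < -1.0f)
    ball_dy = -ball_dy; // bounce off the top/bottom edges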

Pruning short line segments from edge detector output?

I am looking for an algorithm to prune short line segments from the output of an edge detector. As can be seen in the image (and link) below, there are several small edges detected that aren't "long" lines. Ideally I'd like just the 4 sides of the quadrangle to show up after processing, but if there are a couple of stray lines, it won't be a big deal... Any suggestions?
Image Link
Before finding the edges, pre-process the image with an open or close operation (or both); that is, erode followed by dilate, or dilate followed by erode. This should remove the smaller objects but leave the larger ones roughly the same.
I've looked for online examples, and the best I could find was on page 41 of this PDF.
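If you want to see what open and close do without a library, here is a minimal sketch of binary 3x3 erosion and dilation in C; the 3x3 square element and the image size are arbitrary choices for illustration:

#include <string.h>

#define IMG_W 64 /* example size */
#define IMG_H 64

/* 3x3 binary erosion on a 0/1 image: a pixel stays 1 only if its whole
 * 3x3 neighborhood is 1, so specks smaller than the element vanish.
 * Border pixels are set to 0. */
void erode3x3(const unsigned char in[IMG_H][IMG_W], unsigned char out[IMG_H][IMG_W])
{
    memset(out, 0, IMG_H * IMG_W);
    for (int y = 1; y < IMG_H - 1; y++)
        for (int x = 1; x < IMG_W - 1; x++) {
            unsigned char all = 1;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    all &= in[y + dy][x + dx];
            out[y][x] = all;
        }
}

/* 3x3 binary dilation: a pixel becomes 1 if any pixel in its neighborhood
 * is 1, so small gaps close up. An open is erode then dilate; a close is
 * dilate then erode. */
void dilate3x3(const unsigned char in[IMG_H][IMG_W], unsigned char out[IMG_H][IMG_W])
{
    memset(out, 0, IMG_H * IMG_W);
    for (int y = 1; y < IMG_H - 1; y++)
        for (int x = 1; x < IMG_W - 1; x++) {
            unsigned char any = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    any |= in[y + dy][x + dx];
            out[y][x] = any;
        }
}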
I doubt that this can be done with a simple local operation. Look at the rectangle you want to keep - there are several gaps, so performing a local operation to remove short line segments would probably heavily reduce the quality of the desired output.
Consequently, I would try to detect the rectangle as important content by closing the gaps, fitting a polygon, or something like that, and then in a second step discard the remaining unimportant content. Maybe the Hough transform could help.
UPDATE
I just used this sample application using a Kernel Hough Transform with your sample image and got four nice lines fitting your rectangle.
In case somebody stumbles on this thread, OpenCV 2.x ships an example named squares.cpp that basically nails this task.
I made a slight modification to the application to improve the detection of the quadrangle.
Code:
#include "highgui.h"
#include "cv.h"
#include <iostream>
#include <math.h>
#include <string.h>
using namespace cv;
using namespace std;
void help()
{
cout <<
"\nA program using pyramid scaling, Canny, contours, contour simpification and\n"
"memory storage (it's got it all folks) to find\n"
"squares in a list of images pic1-6.png\n"
"Returns sequence of squares detected on the image.\n"
"the sequence is stored in the specified memory storage\n"
"Call:\n"
"./squares\n"
"Using OpenCV version %s\n" << CV_VERSION << "\n" << endl;
}
int thresh = 70, N = 2;
const char* wndname = "Square Detection Demonized";
// helper function:
// finds a cosine of angle between vectors
// from pt0->pt1 and from pt0->pt2
double angle( Point pt1, Point pt2, Point pt0 )
{
double dx1 = pt1.x - pt0.x;
double dy1 = pt1.y - pt0.y;
double dx2 = pt2.x - pt0.x;
double dy2 = pt2.y - pt0.y;
return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
// returns sequence of squares detected on the image.
// the sequence is stored in the specified memory storage
void findSquares( const Mat& image, vector<vector<Point> >& squares )
{
squares.clear();
Mat pyr, timg, gray0(image.size(), CV_8U), gray;
// karlphillip: dilate the image so this technique can detect the white square,
Mat out(image);
dilate(out, out, Mat(), Point(-1,-1));
// then blur it so that the ocean/sea become one big segment to avoid detecting them as 2 big squares.
medianBlur(out, out, 3);
// down-scale and upscale the image to filter out the noise
pyrDown(out, pyr, Size(out.cols/2, out.rows/2));
pyrUp(pyr, timg, out.size());
vector<vector<Point> > contours;
// find squares only in the first color plane
for( int c = 0; c < 1; c++ ) // was: c < 3
{
int ch[] = {c, 0};
mixChannels(&timg, 1, &gray0, 1, ch, 1);
// try several threshold levels
for( int l = 0; l < N; l++ )
{
// hack: use Canny instead of zero threshold level.
// Canny helps to catch squares with gradient shading
if( l == 0 )
{
// apply Canny. Take the upper threshold from slider
// and set the lower to 0 (which forces edges merging)
Canny(gray0, gray, 0, thresh, 5);
// dilate canny output to remove potential
// holes between edge segments
dilate(gray, gray, Mat(), Point(-1,-1));
}
else
{
// apply threshold if l!=0:
// tgray(x,y) = gray(x,y) < (l+1)*255/N ? 255 : 0
gray = gray0 >= (l+1)*255/N;
}
// find contours and store them all as a list
findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
vector<Point> approx;
// test each contour
for( size_t i = 0; i < contours.size(); i++ )
{
// approximate contour with accuracy proportional
// to the contour perimeter
approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.02, true);
// square contours should have 4 vertices after approximation
// relatively large area (to filter out noisy contours)
// and be convex.
// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
if( approx.size() == 4 &&
fabs(contourArea(Mat(approx))) > 1000 &&
isContourConvex(Mat(approx)) )
{
double maxCosine = 0;
for( int j = 2; j < 5; j++ )
{
// find the maximum cosine of the angle between joint edges
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAX(maxCosine, cosine);
}
// if cosines of all angles are small
// (all angles are ~90 degree) then write quandrange
// vertices to resultant sequence
if( maxCosine < 0.3 )
squares.push_back(approx);
}
}
}
}
}
// the function draws all the squares in the image
void drawSquares( Mat& image, const vector<vector<Point> >& squares )
{
for( size_t i = 1; i < squares.size(); i++ )
{
const Point* p = &squares[i][0];
int n = (int)squares[i].size();
polylines(image, &p, &n, 1, true, Scalar(0,255,0), 3, CV_AA);
}
imshow(wndname, image);
}
int main(int argc, char** argv)
{
if (argc < 2)
{
cout << "Usage: ./program <file>" << endl;
return -1;
}
static const char* names[] = { argv[1], 0 };
help();
namedWindow( wndname, 1 );
vector<vector<Point> > squares;
for( int i = 0; names[i] != 0; i++ )
{
Mat image = imread(names[i], 1);
if( image.empty() )
{
cout << "Couldn't load " << names[i] << endl;
continue;
}
findSquares(image, squares);
drawSquares(image, squares);
imwrite("out.jpg", image);
int c = waitKey();
if( (char)c == 27 )
break;
}
return 0;
}
The Hough Transform can be a very expensive operation.
An alternative that may work well in your case is the following:
run two mathematical morphology operations called an image close (http://homepages.inf.ed.ac.uk/rbf/HIPR2/close.htm), with a horizontal and a vertical line structuring element respectively (of a length determined from testing). The point of this is to close all gaps in the large rectangle.
run connected component analysis. If you have done the morphology effectively, the large rectangle will come out as one connected component. Then it only remains to iterate through all the connected components and pick out the most likely candidate, which should be the large rectangle.
Perhaps find the connected components, then remove components with fewer than X pixels (empirically determined), followed by dilation along horizontal/vertical lines to reconnect the gaps within the rectangle.
It's possible to follow two main techniques:
Vector-based operation: map your pixel islands into clusters (blobs, Voronoi zones, whatever). Then apply some heuristics to rectify the segments, like the Teh-Chin chain approximation algorithm, and do your pruning on vector elements (start point, end point, length, orientation and so on).
Set-based operation: cluster your data (as above). For every cluster, compute the principal components and distinguish lines from circles or any other shape by looking for clusters with only one significant eigenvalue (or two, if you look for "fat" segments that resemble ellipses). Check the eigenvectors associated with the eigenvalues to get information about the orientation of the blobs, and make your choice.
Both ways can easily be explored with OpenCV (the former, indeed, falls under the "Contour analysis" category of algorithms).
Here is a simple morphological filtering solution along the lines of @Tom10:
Solution in MATLAB:

se1 = strel('line',5,180); % linear horizontal structuring element
se2 = strel('line',5,90);  % linear vertical structuring element
I = rgb2gray(imread('test.jpg')) > 80; % threshold (since I had a grayscale version of the image)
Idil = imdilate(imdilate(I,se1),se2);  % dilate contours so that they connect
Idil_area = bwareaopen(Idil,1200);     % area filter to remove the small components

The idea is basically to connect the horizontal contours into one large component, and then filter it with an area opening to obtain the rectangle.
Results:
