Unity3D: TPS shooting without mouse aiming - mobile

I'm currently developing a TPS game. I have my player model with the camera snapped to its shoulder, and an empty GameObject placed some distance in front of the player for calculating the bullet direction vector (the yellow diamond in the screenshot).
I'm developing for mobile platforms, so there is no mouse; just that empty GameObject that indicates the direction of the gun.
So when a fire event occurs, I want to apply force to the bullet so that it flies in the right direction. Here is my code:
b.transform.position = transform.position;
b.transform.position += transform.forward;
b.SetActive(true);
var rb = b.GetComponent<Rigidbody>();
print((Aim.position - transform.position).normalized);
rb.AddForce((Aim.position - transform.position).normalized * Thrust);
Aim is my empty GameObject that indicates the direction, transform belongs to the GunEnd GameObject, and b is my bullet instance. If I shoot from the default player position, the bullet flies correctly from GunEnd toward Aim.
But if I rotate the character, for example, more than 90 degrees to the left, the bullets start to fly along a weird trajectory.
So, can anybody help me send the bullets in the correct direction?

When you move its position with b.transform.position += transform.forward; you might be placing it in an odd spot if the transform does not rotate when you aim (and from what I can see in the screenshot, it is not rotating, since the y component of its rotation stays the same). Try moving it along the vector you compute from Aim, like this:
b.transform.position += (Aim.position - transform.position).normalized;
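For reference, here is the fire routine with that change folded in (a minimal sketch using the names from the question: Aim, Thrust, and the pooled bullet b, assumed to live on the GunEnd script):

void Fire(GameObject b)
{
    // Direction from the gun end to the aim object; reused for both the spawn
    // offset and the force, so the bullet starts on the line it will travel.
    Vector3 dir = (Aim.position - transform.position).normalized;

    b.transform.position = transform.position + dir;   // spawn just ahead of the gun
    b.SetActive(true);

    var rb = b.GetComponent<Rigidbody>();
    rb.velocity = Vector3.zero;     // clear leftover velocity if the bullet is pooled
    rb.AddForce(dir * Thrust);
}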

Related

Rubik's cube Thistlethwaite algorithm, check for good edges

I'm trying to build a Rubik's cube solver in C using the Thistlethwaite algorithm.
I'm storing the cube as an array of 6 uint64_t integers (faces).
Each of these faces stores 8 colors, one byte per color.
This structure lets me easily rotate faces using bit manipulation, but I wonder if I should use something else that would be more appropriate for the Thistlethwaite algorithm.
The issue I'm having is checking whether a cube is contained in the subgroup G1 = <L, R, F, B, U2, D2>.
From what I understand, a cube that has correctly oriented edges is contained in this subgroup.
(see https://www.jaapsch.net/puzzles/thistle.htm)
The paper linked at the end of that page clearly indicates how to check whether an edge is good or not, but I could not find a way to implement it.
My question is: how do I check in code whether an edge is correctly oriented, given a scrambled cube?
According to the article, page 1:
Getting into G1
An edge piece is BAD if in taking it home an odd number of quarter-turns of U and D faces is needed; otherwise it is GOOD.
A different way of putting it: if you can manage to bring an edge home without ever using a quarter turn of U or D (so only turning the L, F, R and B faces), then the edge is good; otherwise it is bad.
So let's say you have a scrambled cube and are looking at one particular edge piece. Identify the position where it should end up (based on the centre pieces, obviously). Say that one of the two colours of this edge is red. Then identify where the current place of that red facelet is in the following image:
Do the same for the place where that red side should end up.
If both places have the same colour (yellow or blue) in the above image, then the edge is good. If they have different colours in the above image, then the edge is bad.
You can easily see that if you had taken the other colour side of the edge in question (the not-red one), you would arrive at the same conclusion with this method.
Up to you to translate this to your data structure.
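Here is one possible translation into code, a minimal sketch with assumed names (the logic ports directly to C). The class assignment below was worked out from the generators of G1; since only the equality comparison matters, it agrees with the image regardless of which class is drawn yellow or blue:

enum Face { U, D, L, R, F, B }

static class EdgeOrientation
{
    // A sticker position is identified by the face it sits on plus the face its
    // partner sticker sits on; e.g. the U sticker of the UF slot is (U, F).
    // Class 0: any sticker on L or R, plus U/D stickers whose partner is on F/B.
    // Every move in <L, R, F, B, U2, D2> keeps a sticker inside its class; only
    // quarter turns of U and D move stickers between classes.
    public static int ClassOf(Face sticker, Face partner)
    {
        if (sticker == Face.L || sticker == Face.R) return 0;
        if ((sticker == Face.U || sticker == Face.D) &&
            (partner == Face.F || partner == Face.B)) return 0;
        return 1;
    }

    // An edge is GOOD iff one of its stickers sits in a position of the same
    // class as that sticker's home position. Home: a sticker of colour X
    // belongs on face X, and its partner sticker's colour fixes the slot.
    public static bool IsGood(Face curSticker, Face curPartner,
                              Face homeSticker, Face homePartner)
        => ClassOf(curSticker, curPartner) == ClassOf(homeSticker, homePartner);
}

As a sanity check: after a single F on a solved cube, the UF edge sits in the FR slot with its U-coloured sticker on R, and ClassOf(R, F) == ClassOf(U, F), so the edge is still good; after a single U, that sticker lands at (U, L) or (U, R), which is the other class, so the edge is bad, as expected.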

Predict Where UI Will Land Based on Velocity (ScrollRect, Unity)

I know this may not be a typical question, but I think you can help me figure it out.
Background: I want to create a ScrollRect (a scrollable UI element) that snaps onto the elements it's scrolling, so that it always comes to a stop with an element in the center. The scrolling of these ScrollRects is based on the velocity of your finger when it leaves the screen, and a given input velocity always moves the content the same amount (within 1 pixel).
So I figured the smoothest way to create this snapping ScrollRect would be to predict where it will land, then adjust the deceleration rate so that it lands on the nearest element instead.
So basically I would like to:
Turn this loop into a math function where I can input velocity & get out the movement delta.
Be able to figure out what the deceleration rate should be based on the end movement delta & velocity.
Here's the code that the scroll rect uses for its movement:
protected virtual void LateUpdate()
{
    // It's probably easiest if you imagine positionX always starting at 0,
    // but I'm no expert.
    m_VelocityX *= Mathf.Pow(m_DecelerationRate, Time.unscaledDeltaTime);
    if (Mathf.Abs(m_VelocityX) < 1)
    {
        m_VelocityX = 0;
    }
    positionX += m_VelocityX * Time.unscaledDeltaTime;
}
where LateUpdate is called every frame, and positionX is the x position of the UI element I am moving (it holds the UI elements I want to snap to).
ScrollRect Code
LateUpdate Info
Mathf Info
Time.unscaledDeltaTime Info
And here are some velocities and the resulting movement deltas (how far the content traveled), with a deceleration rate of 0.135, if that's helpful:
Velocity 500 -> 490
Velocity 350 -> 343
Velocity 200 -> 195
Velocity -200 -> -195
Velocity -400 -> -391
Thanks so much for the help! This math is way too hard for me to wrap my head around, but I think it will end up being cool!
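Since the velocity in the loop above compounds to v(t) = v0 * rate^t, the distance traveled until the velocity decays below the cutoff of 1 works out (ignoring frame quantization) to delta = (v0 - sign(v0)) / (-ln(rate)), and that formula inverts cleanly to give the rate for a desired delta. A minimal sketch of both directions (ScrollSnapMath is a made-up helper class; if your measured deltas differ a lot from PredictDelta, double-check what units the velocity and position are actually in):

using UnityEngine;

public static class ScrollSnapMath
{
    // Travel from initial velocity v0 until |velocity| decays below 1, for:
    //   v *= Mathf.Pow(rate, dt); pos += v * dt;
    // Continuous approximation: v(t) = v0 * rate^t, delta = integral of v dt.
    public static float PredictDelta(float v0, float rate)
    {
        if (Mathf.Abs(v0) <= 1f) return 0f;
        float k = -Mathf.Log(rate);             // decay constant, > 0 for rate < 1
        return (v0 - Mathf.Sign(v0)) / k;       // area under the curve up to the cutoff
    }

    // Inverse: the deceleration rate that makes v0 travel exactly targetDelta.
    public static float SolveRate(float v0, float targetDelta)
    {
        return Mathf.Exp(-(v0 - Mathf.Sign(v0)) / targetDelta);
    }
}

To snap, you would predict the landing point with PredictDelta, find the nearest element to it, and then feed the distance to that element into SolveRate to get the new m_DecelerationRate.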

Add SCNText to SCNScene with ARKit

I just started studying ARKit examples and SceneKit. I read a bit about SceneKit and found out that in order to add text, I need to use SCNText.
I tried writing it like this, but the text doesn't show:
guard let pointOfView = sceneView.pointOfView else { return }
let text = SCNText(string: "Hello", extrusionDepth: 4)
let textNode = SCNNode(geometry: text)
textNode.geometry = text
textNode.position = SCNVector3Make(pointOfView.position.x, pointOfView.position.y, pointOfView.position.z)
sceneView.scene.rootNode.addChildNode(textNode)
I just want to add some text (like "hello world") to the SCNScene when the user presses a button.
Edit
I saw the text, but since I haven't set up a plane (or anchor), I can't look at it as if I were standing in front of it. What can I do?
You have at least two problems here.
If you set a node's position to match the camera's, you probably won't see any of that node's content. You need to position things in front of the camera for them to be seen. A camera always looks in the -z direction of its local space. There are a ton of ways to do the requisite math, but here's one that might be handy (coded on a phone, so YMMV):
textNode.simdPosition = pointOfView.simdPosition + pointOfView.simdWorldFront * 0.5
This should put your object half a meter in front of the camera (or rather, in front of where the camera is at that moment; it won't follow the camera). It works because simdWorldFront takes the vector (0, 0, -1), which in local space is the direction the camera node points, and converts it from the node's local space to world space.
The default font size for SCNText is something like 16. But that's in scene units, and scene units map to meters in ARKit. Also, the text box is anchored at its lower left. So quite likely your text isn't visible because it's sixteen meters tall and off to your right.
An easy way to handle this is to set a scale or pivot on the node that makes its content much smaller.

About finding the pupil in a video

I am currently working on an eye-tracking project, in which I am tracking eyes in a webcam video (resolution 640x480).
I can locate and track the eye in every frame, but I need to locate the pupil. I have read a lot of papers, and most of them refer to Alan Yuille's deformable template method to extract and track the eye features. Can anyone help me with code for this method in any language (MATLAB/OpenCV)?
I have tried different thresholds, but due to the low resolution of the eye regions, it does not work very well. I would really appreciate any kind of help with finding the pupil, or even the iris, in the video.
What you need to do is convert your webcam to a near-infrared (NIR) cam. There are plenty of tutorials online for that. Try this.
An image taken with an NIR cam will look something like this:
You can then use OpenCV to threshold the image.
Then use the erode function.
After this, fill the image with some color, taking a corner as the seed point.
Eliminate the holes and invert the image.
Apply the distance transform (the distance of each pixel to the nearest zero pixel).
Find the coordinate of the maximum value and draw a circle there.
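If it helps, here is how those steps might look with OpenCvSharp, the .NET wrapper for OpenCV (the file name, threshold value, and kernel size are placeholders to tune for your camera):

using OpenCvSharp;

Mat eye = Cv2.ImRead("nir_eye.png", ImreadModes.Grayscale);

// 1. Threshold: in a NIR image the pupil separates cleanly from the iris.
Mat bin = new Mat();
Cv2.Threshold(eye, bin, 70, 255, ThresholdTypes.Binary);

// 2. Erode to remove small speckles.
var kernel = Cv2.GetStructuringElement(MorphShapes.Ellipse, new Size(3, 3));
Cv2.Erode(bin, bin, kernel);

// 3. Flood-fill from a corner, then invert, to eliminate holes.
Cv2.FloodFill(bin, new Point(0, 0), Scalar.White);
Cv2.BitwiseNot(bin, bin);

// 4. Distance transform: each pixel gets its distance to the nearest zero
//    pixel, so the maximum sits at the centre of the largest blob (the pupil).
Mat dist = new Mat();
Cv2.DistanceTransform(bin, dist, DistanceTypes.L2, DistanceTransformMasks.Mask5);
Cv2.MinMaxLoc(dist, out _, out double maxVal, out _, out Point centre);

// 5. Draw the result: pupil centre, with the distance value as the radius.
Cv2.Circle(eye, centre, (int)maxVal, Scalar.White, 2);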
If you're still working on this, check out my OptimEyes project: https://github.com/LukeAllen/optimeyes
It uses Python with OpenCV, and works fairly well with images from a 640x480 webcam. You can check out the "Theory Paper" and demo video on that page also. (It was a class project at Stanford earlier this year; it's not very polished but we made some attempts to comment the code.)
Depending on the application, for tracking the pupil I would find a bounding box for the eyes and then find the darkest pixel within that box.
Some pseudocode:
box left_location = findlefteye()
box right_location = findrighteye()
image_matrix left = image[left_location]
image_matrix right = image[right_location]
image_matrix average = left + right             // sum the two eye patches
pixel min = min(average)                        // offset of the darkest pixel in the summed patches
pixel left_pupil = left_location.corner + min   // the same offset works for both eyes,
pixel right_pupil = right_location.corner + min // assuming they look in the same direction
Following on from the first answer suggested by Anirudth:
Just apply the HoughCircles function after the thresholding step (step 2).
Then you can directly draw a circle around the pupil, and using its radius (r) and centre (x, y) you can easily find the centre of the eye.
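With OpenCvSharp, that step might look like the sketch below (all parameter values are rough starting points to tune, and HoughCircles usually behaves better on a lightly blurred image):

using OpenCvSharp;

Mat binary = Cv2.ImRead("eye_thresholded.png", ImreadModes.Grayscale);

CircleSegment[] circles = Cv2.HoughCircles(
    binary,
    HoughModes.Gradient,
    dp: 1,          // accumulator resolution = image resolution
    minDist: 20,    // minimum distance between detected centres
    param1: 100,    // Canny high threshold used internally
    param2: 20,     // accumulator threshold: lower finds more (false) circles
    minRadius: 5,
    maxRadius: 40);

foreach (var c in circles)
    Cv2.Circle(binary, new Point((int)c.Center.X, (int)c.Center.Y),
               (int)c.Radius, Scalar.White, 2);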

Evaluation function of an abstract strategy game

I'm coding an abstract strategy game in C# and XNA. For the AI, I'm currently using NegaScout with a search depth of 5. The following is a description of the game:
The game consists of a board of 6x7 hexagonal locations, 42 hexagonal tiles, and 6 pieces (1 king and 5 pawns) per player (2 players max).
During the first phase of the game, the players alternately place a random tile on an empty location of the board. Each tile can have up to 6 arrows pointing at its edges, and some arrows can be double-pointed. The arrows give the direction(s) of movement from that tile; a double-pointed arrow makes a piece move/jump 2 locations if there is a valid location. The players are not allowed to place tiles in their opponent's row while there are still empty locations elsewhere on the board.
Once this phase is complete, the next player in turn places his king on any one of the 6 tiles of the row nearest to him. Then the movement of the pieces commences. Pieces are moved according to the arrows on the tiles. The game is won by capturing or blocking the king.
Ok, so now to my move generation function.
Tile placement stage
a) Place a tile on the nearest row, rotating the tile to find the optimal rotation.
b) Once the nearest row is full, place a tile on an empty location that is surrounded by locations on all sides (i.e., not on the edge of the board), again rotating the tile to find the optimal rotation.
c) If no such locations are found, consider all remaining empty locations, trying to find the optimal rotation.
King placement stage
a) Locate the location with the best tile and place the king there.
b) Place the remaining pawns on the remaining empty locations in the row.
Movement stage
a) If the king is attacked, try to capture the attacking piece if that piece is not defended.
b) Add moves for all of the player's pieces that are being attacked.
c) Add captures of all opponent pieces that the player can attack.
d) Add all locations the player can move to.
Now to the evaluation function.
Tile placement stage
score = number of tiles the current player has placed so far + the current player's tiles on the row nearest to him - number of tiles the opponent has placed so far - the opponent's tiles on the furthest row (nearest to the opponent).
King placement stage
score = current player's tiles on the nearest row - opponent's tiles on the furthest row (nearest to the opponent).
Movement stage
score = current player's pieces' value - opponent's pieces' value.
The weighting of the tiles is 100 for every valid location an arrow points to. The weighting of the pieces is as follows:
piece value = piece type (king = 10000, pawn = 1000) + mobility + defended - attacked - en prise - blocked
where:
mobility = number of locations the piece can move to (free or occupied by the opponent) * 1000
defended = number of the current player's pieces surrounding this piece that can actually move to its location * 1000
attacked = number of opponent pieces surrounding this piece that can actually move to its location * 1000
blocked = -10000 for a king, -1000 for a pawn, when the piece cannot move because all arrows point to invalid locations and it has no chance of moving again this game.
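Expressed as code, that weighting might look something like this sketch (the counts are assumed to be computed elsewhere by the move generator, and en prise is kept as a raw parameter because its calculation isn't defined above):

static class Evaluation
{
    public static int PieceValue(bool isKing, int mobility, int defenders,
                                 int attackers, int enPrise, bool isBlocked)
    {
        int value = isKing ? 10000 : 1000;      // piece type
        value += mobility * 1000;               // locations the piece can move to
        value += defenders * 1000;              // own pieces able to move here
        value -= attackers * 1000;              // enemy pieces able to move here
        value -= enPrise;                       // not defined in the question
        if (isBlocked)                          // all arrows point off the board,
            value -= isKing ? 10000 : 1000;     // with no chance of moving again
        return value;
    }
}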
Quite long, but here come my problems:
When placing tiles, the AI sometimes places a tile with the wrong rotation (i.e., it places a tile in a location where the arrows point to no valid locations). Sometimes this happens in its 'home' row.
When moving pieces, the AI ignores king safety. It mostly moves the king, which gets captured within about 4-6 moves.
Does anybody, especially anyone with chess AI experience, have ideas and suggestions on how to improve my AI, in particular my move generation and evaluation functions?
Thanks
Ivan
By the way, if anyone is interested in trying out the game, just let me know and I'll upload a setup to my website.
Quite long, but here come my problems:
Quite long, indeed.
When placing tiles, the AI sometimes places a tile with the wrong rotation (i.e., it places a tile in a location where the arrows point to no valid locations). Sometimes this happens in its 'home' row.
In other words, you have a bug in your code. There's no way to answer this, even with all the extensive preamble. It should be a separate, concisely phrased question that includes the relevant code.
When moving pieces, the AI ignores king safety. It mostly moves the king, which gets captured within about 4-6 moves.
Same as above: this can't be answered from even the extensive preamble you've written.
My advice is to be more concise with your questions, post only the details relevant to the problem, and not combine multiple questions into a single post.
Does anybody, especially anyone with chess AI experience, have ideas and suggestions on how to improve my AI, in particular my move generation and evaluation functions?
This is an overly vague question that would normally get closed. If you want advice about your code, you have to provide that code for anyone to give you a helpful answer that goes beyond blind speculation!
