A data design question about labeling sides in a battle game - database

I am designing a chess-like game, but I believe this question can be extended to any game that involves battle.
So my question is: how should I properly label an object as a friend object or an enemy object in a battle game?
Take chess for example. If player W plays white pieces, then W's opponent, player B, plays black pieces.
For W, all white pieces are friend objects and black pieces are enemy objects, while player B has black friends and white enemies. So here the labeling problem can be solved simply by using colors: when you calculate the possible moves of a piece and decide whether you can capture another piece, you basically check whether the two pieces have different colors.
However, what if both players have the same view? For example, a game could color all friendly pieces black and all enemy pieces white, with each player starting with their pieces at the bottom of their own view. In that sense, pieces really can be viewed as attributes of the player, so the black/white coloring is based on whether a piece's owner id is the same as yours.
Here comes a problem:
Although I can decide the color by comparing the ids, what if I want to know whether a friendly piece has entered the enemy region?
The enemy region could be defined as the upper part of the board. Since the board is not part of the player (while pieces can be), the previous id-comparison solution cannot be used here.
One possible solution I can think of is to make the board an attribute of the player. However, there are two players, so this doubles the data storage even though, from an objective perspective, there is only one board. Any move or further change to a piece then requires operations on two boards, which is also time-consuming.
To avoid doubling the board, I tried another strategy: a landscape view. What I mean is that the data does not take sides the way the players do. Players view the board from bottom to top, like a portrait view, while the data can be viewed from left to right (or right to left), the way an impartial referee would observe it. The data becomes objective here. But then the problem is how to present this objective data to each player in a subjective manner, and it is still not easy to know whether a piece has entered the enemy region.
So my question is: how should I design the data structure so that the data is not doubled, tripled, or copied multiple times, while it also stays easy to know the labels of a public area? In my case the public area is the chess board, on which all pieces fight. I call it public because both players can move their pieces anywhere on the board, and third parties can also see what is going on. The labels of the pieces, by contrast, are subjective attributes, each player's private view.
I am not sure I can phrase my question well, so I did not search for a similar question. Sorry for my laziness; if a duplicate question has already been asked, please point me to it.
---Update---
I think a clearer version of the question is this: say I have a 3 × 3 board. If I move my piece from (1, 2) to (2, 3), then from my opponent's perspective the move does not take place at the same location; it is mirrored, from (3, 2) to (2, 1). What is the best way to store this move? Do I have to pick one side (subjective) to store it, or is there a neutral/objective way to do this?
The drawback of the subjective way, for me, is that I need to recalculate the moves for the other player to fit his or her view. A neutral way might be a life saver.

A general answer without a concrete database design: there is data, and there are multiple views. There is no need to copy data; instead, keep multiple references to the same data (which of course also takes some memory). E.g. a rook is referenced by a square and by a player. Such references may be stored (a collection "enemies") or calculated on the fly (a collection "reachable"), depending on runtime/storage design decisions. The multiple players are also just views. On a game move you have to update all stored (static) views, while dynamic views need no update.
For symmetric games there is also the possibility of not storing the opponent's view at all, but generating it by transforming the whole game into the other player's view with every move. Again, this is your decision: stored vs. dynamically created.
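To make that concrete, here is a minimal sketch in Python (the language is only for illustration, and names such as Piece, Board, and enemies_of are made up): a single board stores every piece once, each piece carries only an owner id, and per-player collections like "enemies" are computed on the fly as dynamic views over the same objects, never as copies.

```python
from dataclasses import dataclass

@dataclass
class Piece:
    kind: str        # e.g. "rook"
    owner: int       # player id; the only "side" information stored
    pos: tuple       # (col, row) in one fixed, board-global orientation

class Board:
    """Single objective board; both players reference the same Piece objects."""
    def __init__(self, pieces):
        self.pieces = pieces

    def enemies_of(self, player_id):
        # Dynamic view: computed on the fly, no copied data.
        return [p for p in self.pieces if p.owner != player_id]

    def friends_of(self, player_id):
        return [p for p in self.pieces if p.owner == player_id]

# Usage: one shared board, two subjective views over the same objects.
board = Board([Piece("rook", owner=0, pos=(1, 1)),
               Piece("rook", owner=1, pos=(3, 3))])
print(board.enemies_of(0))   # player 1's rook
print(board.enemies_of(1))   # player 0's rook
```

If such lookups become hot paths, you can cache the lists and update them on each move (a stored, static view), which is exactly the stored-vs.-dynamic trade-off described above.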

If I need to move my piece from (1, 2) to (2, 3), then from the perspective of my opponent, the move is not taking place at the same location. It is the opposite: it is from (3, 2) to (2, 1). What is the best way to store this move? Do I have to take one side (subjective) to store it, or is there a neutral/objective way to do this?
Pick one orientation for the board. It doesn't matter which one you pick. This is a neutral/objective way: neither choice offers any advantage to either player, and the choice doesn't significantly affect the code. Any imbalance you imagine, such as it being better to have (0, 0) on your own side, is imaginary.
This is not a "subjective" choice, it is an "arbitrary" choice.
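As a hedged sketch of that idea, here is the 3 × 3 example from the update in Python (the flip helper and its name are just for illustration): the move is stored once, in the arbitrarily chosen orientation, and the opponent's view is produced by a pure presentation-layer transform, so nothing is stored twice.

```python
N = 3  # board size from the example; coordinates are 1-based

def flip(square, n=N):
    """Map a square into the 180-degree rotated view of the other player."""
    col, row = square
    return (n + 1 - col, n + 1 - row)

# The move is stored once, in the canonical orientation:
stored_move = ((1, 2), (2, 3))

# Only when rendering for the opponent is it transformed:
opponent_view = tuple(flip(sq) for sq in stored_move)
print(opponent_view)   # ((3, 2), (2, 1)) -- matches the example in the question
```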

Related

Problem in designing an algorithm that moves structures in an array[][] in a certain way

I am a beginner C programmer.
I have been failing to find an algorithm that can solve the following problem:
On a two-dimensional array "board[x][y]", which contains the following elements: Floor (white), Item (blue), Backpack (green), and Player (orange), the Player can move and can move Items by directly "touching" them, in such a way that they move in the same direction and stay attached. "Touching" is defined as an Item being on any of the four sides of the Player.
Graphic 1 describing predicted movement
If there is a Backpack attached to the Player, the Backpack itself acts as a kind of sticky attachment, moving all Items attached to that Backpack, including other Backpacks.
Graphic 2 describing predicted movement
Is there an algorithm that can successfully move the resulting "structures" that can be formed under these rules, moving only the "attached" Items? If you can help me find a way or guide me onto a path, I'd be very happy to learn about it.
Thank you in advance.
Store in a list the positions of the player and of the attached items and backpacks. When you perform a move, all elements in the list move. After the move, consider the player and backpacks in the list, and add the neighboring items or backpacks that are not already there.
[But see my second comment.]
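Here is a rough Python outline of that idea (the original project is in C, so treat this only as a sketch; the cell symbols 'P', 'I', 'B', '.' and all function names are assumptions, and wall/collision handling is left out).

```python
DIRS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def in_bounds(board, pos):
    return 0 <= pos[0] < len(board) and 0 <= pos[1] < len(board[0])

def neighbors(board, pos):
    r, c = pos
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [p for p in cand if in_bounds(board, p)]

def attach_neighbors(board, group):
    """Grow the group: the player and backpacks are 'sticky', so any item or
    backpack touching them (directly or through a chain of backpacks) joins."""
    changed = True
    while changed:
        changed = False
        for pos in list(group):
            if board[pos[0]][pos[1]] in ("P", "B"):
                for n in neighbors(board, pos):
                    if n not in group and board[n[0]][n[1]] in ("I", "B"):
                        group.add(n)
                        changed = True
    return group

def do_move(board, group, direction):
    """Move the whole attached structure one step, then re-grow the group,
    as the answer suggests (move everything in the list, then add new neighbors)."""
    dr, dc = DIRS[direction]
    new_board = [row[:] for row in board]
    for r, c in group:                      # clear the old cells first so pieces
        new_board[r][c] = "."               # don't overwrite each other
    for r, c in group:
        new_board[r + dr][c + dc] = board[r][c]
    moved = {(r + dr, c + dc) for r, c in group}
    return new_board, attach_neighbors(new_board, moved)

# Usage: the group starts as just the player's position.
board = [list(".I.."),
         list(".PB."),
         list("....")]
group = attach_neighbors(board, {(1, 1)})   # player at row 1, col 1
board, group = do_move(board, group, "right")
print("\n".join("".join(row) for row in board))
```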

Clutter: Perspective, Skew, and Matrices

Is there a way to change the clutter perspective for a given container or widget?
The clutter perspective controls how all the clutter actors on the screen are displayed when rotated, translated, scaled, etc.
What I would really like to do is to change the perspective's origin from the center of the screen to another coordinate.
I have messed with a few of the stage methods. However, I haven't had much luck understanding some of the results, and often I hit some stability issues.
I know there are transformation matrices that do all the logic under the hood, and there are documented ways to change the transform matrices. Honestly, I haven't researched much further and just thought I would ask for guidance before spending a lot of time on it.
Which leads me to another question regarding the matrices and transformations: can one of these matrices be used to skew an actor, or deform it into a trapezoid, etc.? And any idea how to get started on that, i.e. what a skew matrix would look like?
Finally, does anyone know why the clip path was deprecated? It seems that would have worked for what I ultimately want to do: draw irregularly shaped 2D objects on the screen. If I can implement an answer to question 2, then I guess a clip box with a transformation could be used here.
1. I do not know if (or how) one might change the Clutter stage's focal point.
2. A skew or shear transformation matrix is easy enough to construct, and it can be applied with the GJS Clutter functions Clutter.Actor.set_transform(T) and Clutter.Actor.set_child_transform(T), where T is a Clutter.Matrix. (A small sketch of such a matrix follows this answer.)
This does present another problem, however, for the current codebase, and it leads to another question (which I should probably post elsewhere): when a transform is set on a Clutter actor (or its children), the rest of the actor's properties are ignored. This has the added effect that the Tweener library cannot be used to animate those properties.
3. Finally, one can use Cairo to draw irregularly shaped objects and paths on a Clutter actor; however, the reactive area of the actor (i.e. for mouse-enter and mouse-leave events) will still be the entire actor, not the region defined by the Cairo path.
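To make item 2 a bit more concrete, here is a small NumPy sketch (purely illustrative, not Clutter API) of what a 2D shear matrix looks like and what it does to a square. Note that a pure shear yields a parallelogram; a true trapezoid needs a projective (perspective) term in the bottom row, which an affine shear does not have.

```python
import numpy as np

def shear_matrix(kx=0.0, ky=0.0):
    """Homogeneous 2D shear: x' = x + kx*y, y' = y + ky*x."""
    return np.array([[1.0, kx, 0.0],
                     [ky, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

# Shearing the unit square's corners along x turns it into a parallelogram.
corners = np.array([[0, 1, 1, 0],    # x coordinates
                    [0, 0, 1, 1],    # y coordinates
                    [1, 1, 1, 1]])   # homogeneous coordinate
print(shear_matrix(kx=0.5) @ corners)
```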

Is it possible to get a "SCNVector3" position of a World object using CoreML and ARKit?

I am working on an AR-based solution in which I am rendering some 3D models using SceneKit and ARKit. I have also integrated CoreML to identify objects and render the corresponding 3D objects in the scene.
But right now I am just rendering it in the center of the screen as soon as I detect the object (only for the list of objects that I have). Is it possible to get the position of the real-world object so that I can show an overlay above the object?
That is, if I have a water bottle scanned, I should be able to get the position of the water bottle. It could be anywhere on the water bottle but shouldn't go outside of it. Is this possible using SceneKit?
All parts of what you ask are theoretically possible, but a) for several parts, there’s no integrated API to do things for you, and b) you’re probably signing yourself up for a more difficult problem than you think.
What you presumably have with your Core ML integration is an image classifier, as that’s what most of the easy to find ML models do. Image classification answers one question: “what is this a picture of?”
What you’re looking for involves at least two additional questions:
“Given that this image has been classified as containing (some specific object), where in the 2D image is that object?”
“Given the position of a detected object in the 2D video image, where is it in the 3D space tracked by ARKit?”
Question 1 is pretty reasonable. There are models that do both classification and detection (location/bounds within an image) in the ML community. Probably the best known one is YOLO — here’s a blog post about using it with Core ML.
Question 2 is the “research team and five years” part. You’ll notice in the YOLO papers that it gives you only coarse bounding boxes for detected objects — that is, it’s working in 2D image space, not doing 3D scene reconstruction.
To really know the shape, or even the 3D bounding box of an object means integrating object detection with scene reconstruction. For example, if an object has some height in the 2D image, are you looking at a 3D object that’s tall with a small footprint, or one that’s long and low, receding into the distance? Such integration would require taking apart the inner workings of ARKit, which nobody outside Apple can do, or recreating an ARKit-alike from scratch.
There might be some assumptions you can make to get very rough estimates of 3D shape from a 2D bounding box, though. For example, if you do AR hit tests on the lower corners of a box and find that they’re on a horizontal plane, you can guess that the 2D height of the box is proportional to the 3D height of the object, and that its footprint on the plane is proportional to the box’s width. You’d have to do some research and testing to see if assumptions like that hold up, especially in whatever use cases your app covers.

How are continuous streams of projectiles created in programming? [Say: Gaming]

I'm creating a side-scroller video game for my final project in my grade 12 programming class. Right now I have a nice delta timer my partner made for me, a flying ship, asteroids, and a scrolling background. I've added a few basic things such as collision detection between asteroids and the ship, and ship movement. Now, my next steps are implementing random enemy spawns and projectiles (laser beams :D) from both ships. Implementing random enemy spawns should be fun and relatively easy; however, I'm struggling to figure out how I will handle the many bullets that will be on the screen. I need bullets from the enemy and from the (player-controlled) ship.
How can I achieve this? I know there are probably many answers to this question, but I would really like to see the types of approaches people have to this problem.
So far I have thought that:
a) I could cap the game at a maximum of (say) 200 bullets on the screen (see the pool sketch after this question)
b) I could use a dynamic array (I believe this term means the array can grow or shrink), so that I don't limit the number of possible projectiles
...and then I'm afraid that all these bullets will cause lag from all the collision processing that will happen.
Please shed some light on this, and help guide me along the path to an efficient, well-executed side-scroller game.
Thanks,
Guest dude
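One way to picture option (a) is a fixed-size pool of reusable bullets. The Python sketch below is only an illustration (all class and field names are made up); a plain growable list, as in option (b), works the same way without the cap.

```python
MAX_BULLETS = 200          # the cap from option (a); purely illustrative

class Bullet:
    def __init__(self):
        self.active = False
        self.x = self.y = 0.0
        self.vx = self.vy = 0.0

class BulletPool:
    """Pre-allocate bullets once and recycle them instead of creating/destroying."""
    def __init__(self, size=MAX_BULLETS):
        self.bullets = [Bullet() for _ in range(size)]

    def spawn(self, x, y, vx, vy):
        for b in self.bullets:
            if not b.active:               # reuse the first free slot
                b.active = True
                b.x, b.y, b.vx, b.vy = x, y, vx, vy
                return b
        return None                        # pool exhausted: drop the shot

    def update(self, dt, screen_w, screen_h):
        for b in self.bullets:
            if b.active:
                b.x += b.vx * dt
                b.y += b.vy * dt
                if not (0 <= b.x <= screen_w and 0 <= b.y <= screen_h):
                    b.active = False       # off-screen bullets return to the pool
```

Collision checks then only loop over the active bullets; for a couple of hundred bullets per frame, a brute-force check against the enemies is usually fine for a project of this size, and you can switch to a spatial grid later if it actually lags.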

How can I manipulate a RagDoll made with farseer physics in Silverlight?

I made a ragdoll similar to the one in this demo. This rag doll will be used for a turn-based RPG game where the physics will be used for animations such as the character taking damage, dying, falling down, etc.
What I am pondering at the moment is how I should go about this: should I stick the rag doll to the background by the head (leaving the body dangling) and basically throw its body parts around to simulate punching etc. (as shown in Fig 1), or stiffen the joints and statically rotate and move the body parts for the actions taken (as shown in Fig 2), and, when it comes to the character dying (or a similar action), just loosen the joints and let the rag doll fall down? Or is there a better way to go about doing this?
I am new to Farseer Physics and don't even know whether what I mentioned is possible or overwhelmingly hard to do.
Illustration http://img3.imageshack.us/img3/8681/charactermovementrg5.jpg
Please note that the red line in the figures represents the character's arm.
Not sure that ragdolls are the way to go here if you want animations. But if you do want to use them, I'd lock the feet to the floor and add some rotation springs in the joints so that when no forces are applied, the body stands upright. Then if it gets hit, it'll kind of bend over, but it should rebound to its stand-up state afterwards (you may have to help it along the way back, e.g. apply some forces/torques until it's back where you want it).
For animations, such as the character punching, you could perhaps apply a spring joint (I think that's the name). Connect it to the fist and the destination, and the arm should automatically move there. You could do the same with a kick; just release the lock on that foot. However, I think it might be hard to get it to look right. On the other hand, it would look unique compared to other games, even though it might look kind of funny.
If you're skilled, you might want to create an animation editor and save an animation as a sequence of forces and torques that need to be applied to the limbs to get them where they should be, when they should be there.
I think a better approach is to play an animated sprite rather than going through joint manipulation.
Maybe you can use some RotateTransform implementations to articulate arms and legs.
For sure, animated sprites are the best, and most painless, way of doing this.
