Determine 'landscape' and 'portrait' orientation with Arduino - mobile

How does a mobile device determine if it is in landscape or portrait mode?
And would it be possible to replicate the same functionality on an Arduino Board given that the same sensors would be available?
For example, the board pictured below would be landscape and if rotated by 90deg it would be portrait.

This article describes it in detail: https://www.safaribooksonline.com/library/view/basic-sensors-in/9781449309480/ch04.html
The relevant code snippet is provided in Objective-C, but it can easily be translated into whatever language you need:
float x = -[acceleration x];
float y = [acceleration y];
float angle = atan2(y, x);

if (angle >= -2.25 && angle <= -0.75) {
    // OrientationPortrait
} else if (angle >= -0.75 && angle <= 0.75) {
    // OrientationLandscapeRight
} else if (angle >= 0.75 && angle <= 2.25) {
    // OrientationPortraitUpsideDown
} else if (angle <= -2.25 || angle >= 2.25) {
    // OrientationLandscapeLeft
}
Explanation: For any real arguments x and y that are not both equal to zero, atan2(y, x) is the angle in radians between the positive x-axis of a plane and the point given by the specified coordinates on it. The angle is positive for counter-clockwise angles, and negative for clockwise angles.
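As a rough sketch of how the same classification could look in plain C on an Arduino-class board, assuming an accelerometer that already delivers x and y readings (the function and enum names here are made up for illustration, not a specific library API):

#include <math.h>
#include <stdio.h>

typedef enum {
    ORIENTATION_PORTRAIT,
    ORIENTATION_LANDSCAPE_RIGHT,
    ORIENTATION_PORTRAIT_UPSIDE_DOWN,
    ORIENTATION_LANDSCAPE_LEFT
} Orientation;

/* ax, ay: accelerometer readings in any consistent unit (e.g. g).
   Mirrors the Objective-C snippet: negate x, then take atan2. */
Orientation classify_orientation(float ax, float ay)
{
    float angle = atan2f(ay, -ax);

    if (angle >= -2.25f && angle <= -0.75f)
        return ORIENTATION_PORTRAIT;
    else if (angle >= -0.75f && angle <= 0.75f)
        return ORIENTATION_LANDSCAPE_RIGHT;
    else if (angle >= 0.75f && angle <= 2.25f)
        return ORIENTATION_PORTRAIT_UPSIDE_DOWN;
    else
        return ORIENTATION_LANDSCAPE_LEFT;
}

int main(void)
{
    /* Gravity reading mostly along -y, roughly what an upright phone reports. */
    Orientation o = classify_orientation(0.02f, -0.98f);
    printf("orientation code: %d\n", (int)o);   /* 0 == ORIENTATION_PORTRAIT */
    return 0;
}

On an actual board the main loop would simply feed the latest accelerometer sample into a function like this on each pass.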

Related

Using Modulo on Screen Space Coordinates in GLSL Not Producing Expected Result

I suppose this is more of a math question than anything.
Here is a basic shader:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
    if(uv.x < .5) col = vec3(0.0,0.0,0.0);
    // Output to screen
    fragColor = vec4(col,1.0);
}
First we normalize our X coordinates to the range (0.0, 1.0), with 0.0 being the far left of the screen and 1.0 being the far right. By turning all pixels with x coordinates < .5 black, I am simply masking half the screen in black. This results in the following:
If I use screen-space coordinates I can achieve a similar result: the width of the actual screen is 800 pixels, so I can mask every pixel with x < 400 in black by doing the following:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
    if(fragCoord.x < 400.) col = vec3(0.0,0.0,0.0);
    // Output to screen
    fragColor = vec4(col,1.0);
}
Which results in the same:
Logically, then, I should be able to use modulo on the screen-space coordinates to create stripes. By taking mod(fragCoord.x, 10.0) and checking where the result is 0.0, I should be disabling any column of pixels whose x value is a multiple of 10.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
    if(mod(fragCoord.x, 10.0) == 0.0) col = vec3(0.0,0.0,0.0);
    // Output to screen
    fragColor = vec4(col,1.0);
}
However, what I expect isn't happening:
Can somebody explain why I am not seeing columns of black pixels wherever x % 10 == 0?
I assume fragCoord is set by gl_FragCoord.
mod is a floating point operation and the values of gl_FragCoord are not integral. See Khronos OpenGL reference:
By default, gl_FragCoord assumes a lower-left origin for window coordinates and assumes pixel centers are located at half-pixel centers. For example, the (x, y) location (0.5, 0.5) is returned for the lower-left-most pixel in a window.
Therefore the result of the modulo operation will never be 0.0. Convert fragCoord.x to an integral value and use the % operator:
Change

if(mod(fragCoord.x, 10.0) == 0.0) col = vec3(0.0,0.0,0.0);

to

if (int(fragCoord.x) % 10 == 0) col = vec3(0.0);

In case anybody wants to see the result of Rabbid76's answer:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;
    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
    if (int(fragCoord.x) % 10 == 0) col = vec3(0.0);
    // Output to screen
    fragColor = vec4(col,1.0);
}
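To illustrate the answer numerically, here is a small C sketch (an illustration only, using fmodf as a stand-in for GLSL's mod) evaluated at the half-pixel centers that gl_FragCoord reports:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* gl_FragCoord.x takes the values 0.5, 1.5, 2.5, ... at pixel centers. */
    for (float x = 0.5f; x < 12.0f; x += 1.0f) {
        float m   = fmodf(x, 10.0f);         /* analogue of GLSL mod(x, 10.0)    */
        int   hit = ((int)x % 10 == 0);      /* the integer test from the answer */
        printf("x = %4.1f   mod = %3.1f   int test: %s\n", x, m, hit ? "hit" : "-");
    }
    /* The mod result always keeps the .5 fraction, so `== 0.0` never fires,
       while the integer test hits once per 10-pixel stripe (x = 0.5, 10.5, ...). */
    return 0;
}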

Moving sprites in sine wave

Can someone slap me with an idea or a math formula for making my enemies move in a sine wave?
I tried something like this, but they all move at the same time, so they just form a straight line of enemies shifting left and right together.
for (int i = 0; i < 5; i++) {
    /* every enemy evaluates the same sine argument, so they all share one phase */
    float y = sinf(100 + delta_time * 0.06f) * 75;
    float x = game->enemy[i].base_x + y;
    game->enemy[i].x = x;
    game->enemy[i].y += 1;
    SDL_Rect rect = { game->enemy[i].x, game->enemy[i].y, game->enemy[i].w, game->enemy[i].h };
    SDL_RenderCopy(game->renderer, game->enemy[i].sprite, NULL, &rect);
}
Let v = (v_x, v_y) be the overall direction of the enemy, and let o be the vector o = (-v_y/||v||, v_x/||v||), where ||v|| = sqrt(v_x*v_x + v_y*v_y) is the norm of v. The vector o is perpendicular to v, and the sinusoidal motion is wanted in that direction. Consequently, the position p(t) = (x(t), y(t)) is defined as:
x(t) = v_x*t - A*(v_y/||v||)*sin(w*t)
y(t) = v_y*t + A*(v_x/||v||)*sin(w*t)
where A is the amplitude of the oscillations and w their angular frequency (pulsation). The corresponding frequency is f = w/(2*pi), and the wavelength lambda = ||v||/f corresponds to the length of one oscillation.
If the enemy is moving in the x direction (v_y = 0), then:
x(t) = v_x*t
y(t) = A*sin(w*t)
The length of one oscillation is lambda = 2*pi*v_x/w.
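Applied to the loop from the question, a minimal sketch in C could look like this (the base_y and phase fields are additions for this sketch, and the SDL drawing from the question is omitted):

#include <math.h>
#include <stdio.h>

#define NUM_ENEMIES 5
#define AMPLITUDE   75.0f   /* A: how far the enemy swings sideways    */
#define OMEGA        0.06f  /* w: angular frequency of the oscillation */

typedef struct {
    float base_x;   /* centre of the sideways swing                 */
    float base_y;   /* straight-line part of the motion             */
    float phase;    /* per-enemy phase offset so they don't line up */
    float x, y;     /* final position used for drawing              */
} Enemy;

/* t is the running game time (e.g. accumulated delta_time). */
static void update_enemies(Enemy *enemy, float t)
{
    for (int i = 0; i < NUM_ENEMIES; i++) {
        enemy[i].base_y += 1.0f;                                   /* straight-line drift downwards   */
        enemy[i].x = enemy[i].base_x
                   + AMPLITUDE * sinf(OMEGA * t + enemy[i].phase); /* sine perpendicular to the drift */
        enemy[i].y = enemy[i].base_y;
    }
}

int main(void)
{
    Enemy enemies[NUM_ENEMIES];
    for (int i = 0; i < NUM_ENEMIES; i++)
        enemies[i] = (Enemy){ .base_x = 100.0f + 40.0f * i, .base_y = 0.0f,
                              .phase = 0.8f * i };

    for (float t = 0.0f; t < 3.0f; t += 1.0f) {
        update_enemies(enemies, t);
        printf("t=%.0f  first enemy at (%.1f, %.1f)\n", t, enemies[0].x, enemies[0].y);
    }
    return 0;
}

Feeding in the running time rather than the per-frame delta, and giving each enemy its own phase, is what breaks the "straight line of enemies" behaviour from the question.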

Detecting collision with sprites made of multiple pixel widths and heights

Context: Developing a small game on a microprocessor displayed on an LCD screen.
I'm trying to fix this collision detection function. It detects a collision between a wall sprite (1 x 25 pixels) and a player sprite (3 x 3 pixels) and returns 1 or 0; on 1, the player sprite's dx/dy is changed so it stops moving. So essentially the wall sprite is treated as a real wall.
int wall_collision(Sprite *w_sprite)
{
    if (((w_sprite->x >= wall_sprite.x) && ((w_sprite->x - wall_sprite.x) < 3)) && ((w_sprite->y >= wall_sprite.y) && ((w_sprite->y - wall_sprite.y) < 3)))
        return 1;
    if (((w_sprite->x <= wall_sprite.x) && ((w_sprite->x - wall_sprite.x) > -3)) && ((w_sprite->y >= wall_sprite.y) && ((w_sprite->y - wall_sprite.y) < 3)))
        return 1;
    if (((w_sprite->x >= wall_sprite.x) && ((w_sprite->x - wall_sprite.x) < 3)) && ((w_sprite->y <= wall_sprite.y) && ((w_sprite->y - wall_sprite.y) > -3)))
        return 1;
    if (((w_sprite->x <= wall_sprite.x) && ((w_sprite->x - wall_sprite.x) > -3)) && ((w_sprite->y <= wall_sprite.y) && ((w_sprite->y - wall_sprite.y) > -3)))
        return 1;
    return 0;
}
My main issue is specifying the exact number the sprite should be equal to / greater than / less than, which as you can see in the example is 3 or -3. When I take those numbers out, it returns 1 and stops the sprite regardless of where it is, because the sprite is technically still on the same x or y axis as the wall, even though proximity-wise it isn't touching the wall. What are the correct size parameters for this?
Case problem: my sprite should only stop when it's directly touching the wall; currently it either passes through the wall, or stops when it's not even close to the wall.
First of all, your code seems a little complex. Below is a canonical (or so I believe) method of detecting collisions: with this function, instead of having to check each collision manually, we can detect any collision between colliders A and B. Keep in mind that the collider struct used here would have to contain the top, bottom, left, and right coordinates of each collider. You can then store all colliders in an array and index through them to check for collisions. The function:
int collisionFunction(collider *A, collider *B){
    //Check to see if the colliders are "lined up" on the X-axis
    if( (A->right > B->left ) && (B->right > A->left) ){
        //Check to see if the colliders are also "lined up" on the Y-axis
        if( (A->top < B->bottom) && (B->top < A->bottom) ){
            return 1; // COLLISION DETECTED
        }
    }
    return 0; // NO COLLISION DETECTED
}
Explanation of the function/algorithm: First of all, we check to see if A and B are "lined up" on the x axis. By "lined up", I mean to say that two colliders could be colliding based off their position on the X-axis. Then, we check to see if each collider is lined up on the Y-axis. If both conditions are met, then the colliders are colliding. It can be a little hard to grasp this at first so I suggest you trace this by drawing out shapes on paper (some colliding, others not) and see whether the algorithm says they're colliding or not. This algorithm will work for coordinate systems where the origin (i.e. (0,0) ) is in the top left corner of the screen, which is the convention for 2D graphics.
Keep in mind that your player would go through the wall partially when using this algorithm - this is very common in 2D games. But, given the number of pixels you're using, this could obviously be a problem. Therefore, you should take that into account when implementing this algorithm.
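For reference, here is a sketch of the pieces around that function which the answer assumes: the collider struct and the "store all colliders in an array" idea (the field types and the helper name are made up here):

// Axis-aligned collider as described above: all four edges stored directly.
typedef struct {
    int left, right, top, bottom;
} collider;

int collisionFunction(collider *A, collider *B);   /* the function defined above */

// Check one collider (e.g. the player) against an array of others (walls, enemies, ...).
int collidesWithAny(collider *player, collider *others, int count)
{
    for (int i = 0; i < count; i++) {
        if (collisionFunction(player, &others[i]))
            return 1;   /* collided with others[i] */
    }
    return 0;
}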
How about this?
int wall_collision(Sprite *w_sprite)
{
    if(w_sprite->left >= wall_sprite->right) return 0;
    if(w_sprite->right <= wall_sprite->left) return 0;
    if(w_sprite->top >= wall_sprite->bottom) return 0;
    if(w_sprite->bottom <= wall_sprite->top) return 0;
    return 1;
}
Left/Right/Top/Bottom could be values or functions, or just replace them with the actual values. The "left" and "top" would be the same as the x/y value of the sprite or wall. The "right" and "bottom" would be the x/y + the width/height in pixels of the sprite or wall, respectively.
Take a look at this link for a more in-depth tutorial on simple collision detection: http://lazyfoo.net/SDL_tutorials/lesson17/index.php
EDIT: The example code assumes a coordinate system where x increases as you go right, and y increases as you go down.
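For concreteness, here is a small sketch of that derivation, assuming each sprite stores x, y, w, h as in the question (the Box type and helper names are made up for illustration):

/* Edges of a sprite in a top-left-origin coordinate system,
   derived from its position and size as described above. */
typedef struct { int x, y, w, h; } Box;

static int box_left(const Box *b)   { return b->x; }
static int box_right(const Box *b)  { return b->x + b->w; }
static int box_top(const Box *b)    { return b->y; }
static int box_bottom(const Box *b) { return b->y + b->h; }

/* Same test as wall_collision above, written with the helpers. */
int boxes_touch(const Box *a, const Box *b)
{
    if (box_left(a)   >= box_right(b))  return 0;
    if (box_right(a)  <= box_left(b))   return 0;
    if (box_top(a)    >= box_bottom(b)) return 0;
    if (box_bottom(a) <= box_top(b))    return 0;
    return 1;
}

With the sprites from the question, w and h would be 3 and 3 for the player and 1 and 25 for the wall.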

Length of the intercept from intersection of a line with a cylinder (ring)

I have some sources with coordinates (xn, yn, zn) w.r.t a center C of a ring and unit vectors (ns_ux, ns_uy, ns_uz) along my line of sight. I want to calculate whether these sources pass through a cylinder of inner and outer radius 9.5 and 10.5 units, respectively. If they intersect this cylinder (or I call it ring, sometimes), then I would like to calculate the length of this intercept. My position is outside of this ring and there are sources which lie beyond the center C on the other side. These sources, therefore will pass through this ring twice. This picture should help visualize this problem.
#include <stdio.h>
#include <math.h>

#define PI 3.142

int main() {
    int k, number = 200;
    float r_min = 9.50000;
    float r_max = 10.500000;
    float step = 0.3;
    float z_c = 3.0;
    float ns_ux[number], ns_uy[number], ns_uz[number], xn[number], yn[number], zn[number],
          l[number], b[number], ns[number], x_comp, y_comp, z_comp, radial;
    FILE *val = NULL;
    val = fopen("novae_uniform_unitvectors.txt", "r");
    for (k = 0; k <= (number - 1); k++) {
        fscanf(val, "%f %f %f %f %f %f %f %f %f", &xn[k], &yn[k], &zn[k],
               &ns_ux[k], &ns_uy[k], &ns_uz[k], &l[k], &b[k], &ns[k]);
        float u = 0.;
        for (u = 0.; u <= 30.; u = u + step) {
            /* vector addition: x_comp w.r.t. the center C when stepped by 'u' units along the l.o.s. */
            x_comp = xn[k] + u * ns_ux[k];
            y_comp = yn[k] + u * ns_uy[k];
            radial = pow((x_comp * x_comp + y_comp * y_comp), 0.5);
            if (radial >= r_min && radial <= r_max) {
                z_comp = zn[k] + u * ns_uz[k];
                /* check that the height is consistent with the ring's height */
                if (z_comp >= -z_c && z_comp <= z_c)
                    printf("%f\t%f\t%f\t%f\n", l[k], u, z_comp, radial);
            }
        }
    }
    return 0;
}
These 'radial' values give me a list of points where my line of sight intersects the ring, but I require only the end points to calculate the length of the intercept on the ring.
e.g. in the case listed below, my l.o.s. passes through the ring at I and then comes off at II. Then it keeps going until it hits the ring again at III and then comes out of it at IV. I need to store only points I, II, III and IV in my file. How would I be able to do that?
longitude       u           z_comp      radial
121.890999      0.100000    0.016025     9.561846   I
121.890999      0.200000    0.038453     9.538050
121.890999      0.300000    0.060881     9.515191   II
121.890999      4.799998    1.070159     9.518372   III
121.890999      4.899998    1.092587     9.541364
121.890999      4.999998    1.115016     9.565292
...... skipping to save space ........
121.890999      7.399995    1.653297    10.400277
121.890999      7.499995    1.675725    10.444989
121.890999      7.599995    1.698153    10.490416   IV
Figured out a way to store only the initial and final values by using a boolean flag, as follows (continued from the code in the question; fp is assumed to be an output file opened elsewhere, and <stdbool.h> is needed for bool):
bool change = true;        /* true while we are waiting to enter the ring */
float last_radial = 0.f;   /* most recent radial value that was inside the ring */
...(rest of the program)...
if (radial >= r_min && radial <= r_max) {
    z_comp = zn[k] + u * ns_uz[k];
    if (z_comp >= -z_c && z_comp <= z_c) {
        last_radial = radial;
        if (change) {               /* first point inside the ring: the entry point */
            printf("%f\t%f\t%f\t%f\t", l[k], b[k], ns[k], radial);
            change = !change;
        }
    }
} else {                            /* the radial condition is no longer met */
    if (!change) {                  /* we were inside, so this was the exit point */
        fprintf(fp, "%f\n", last_radial);
        change = !change;
    }
}
This would store only the first and the last values of the radial component (i.e. the end points of the intercept of the line of sight vector on the ring)
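To turn those end points into an actual intercept length, one option is to remember the u value at which the line of sight enters the ring and subtract it from the u value at which it leaves. A minimal self-contained sketch of that bookkeeping, using the same stepping scheme as the question (the function and variable names here are made up):

#include <stdio.h>
#include <stdbool.h>
#include <math.h>

/* Ring geometry from the question. */
#define R_MIN 9.5f
#define R_MAX 10.5f
#define Z_C   3.0f

/* Walk along one line of sight (origin (x0, y0, z0), unit direction (ux, uy, uz))
   and print the length of every segment that lies inside the ring. */
void report_intercepts(float x0, float y0, float z0,
                       float ux, float uy, float uz, float step)
{
    bool  inside  = false;
    float entry_u = 0.0f;

    for (float u = 0.0f; u <= 30.0f; u += step) {
        float x = x0 + u * ux, y = y0 + u * uy, z = z0 + u * uz;
        float r = sqrtf(x * x + y * y);
        bool  now = (r >= R_MIN && r <= R_MAX && z >= -Z_C && z <= Z_C);

        if (now && !inside) {            /* entering the ring (point I or III) */
            entry_u = u;
            inside  = true;
        } else if (!now && inside) {     /* leaving the ring (point II or IV)  */
            printf("intercept length = %f\n", u - entry_u);
            inside = false;
        }
    }
}

int main(void)
{
    /* Example: a source 15 units from the centre C, looking straight through
       the ring, so the line of sight crosses the ring wall twice. */
    report_intercepts(-15.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.3f);
    return 0;
}

The step size limits the accuracy of the reported length; a smaller step (or interpolating between the last outside and first inside samples) tightens it.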

OpenGL - Mapping between x and y in glVertex2f(x, y) to screen integer coordinates

I would like to know how the vertices of glVertex2f(x, y) map to actual screen integer co-ordinates.
I intend to use a complex plane with minR, minI and maxR, maxI (R and I being the real and imaginary parts), such that the plane gets mapped to 512 x 512 pixels on the screen. I have 512 points stepped between the min and max values.
The mapping between the vertices is unclear to me, since I had to scale my planar image using glScalef(100, 100, 0) to get it to roughly fit the screen. But still, a large portion of it is left blank.
Please note that I am using the glBegin(GL_POINTS) routine to map the points in the plane to the screen.
The code looks thus,
for (X = 0; X < 512; X++)
    for (Y = 0; Y < 512; Y++)
        glVertex2f(Complexplane[X][Y].real, Complexplane[X][Y].imag);
P.S.:
Complexplane[0][0].real = -2, Complexplane[0][0].imag = -1.2
Complexplane[511][511].real = 1.0, Complexplane[511][511].imag = 1.8
I'm assuming you haven't set the projection or modelview matrices - they will be set to the identity matrix by default BTW...
For X,Y coordinates, a point will be visible if: -1 <= X <= 1, -1 <= Y <= 1
The glViewport function describes how this range is mapped to the window. It is initially set to (0, 0, window_width, window_height) when the GL context is created. The fact that your image still only takes up a portion of the window even with glScalef(100, 100, 0) suggests that you are applying another transform elsewhere.
The mapping depends on the transformation matrices you set. Up to OpenGL 2, the fixed-function vertex pipeline is:
v_eye = ModelviewMatrix * v
v_projected = ProjectionMatrix * v_eye
v_clipped = clip(v_projected)
v_NDC.xyzw = v_clipped.xyzw / v_clipped.w
The default matrices are identity, so the only operation applied in the default state is the clipping. v_NDC then undergoes the viewport transform:
p.xyz = (v_NDC.xyz + 1) * viewport.wh / 2 + viewport.xy
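Putting that together for the question: to map the complex-plane rectangle [minR, maxR] x [minI, maxI] onto the 512 x 512 window without a manual glScalef, one option in this legacy fixed-function setup is to put that rectangle into the projection matrix. A sketch (the function wrapper is made up; minR/maxR/minI/maxI are the extents described in the question):

#include <GL/gl.h>

/* Map the complex-plane rectangle [minR, maxR] x [minI, maxI] onto a
   512 x 512 window, so glVertex2f(real, imag) needs no extra scaling. */
static void setup_complex_plane_projection(double minR, double maxR,
                                           double minI, double maxI)
{
    glViewport(0, 0, 512, 512);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* left, right, bottom, top, near, far */
    glOrtho(minR, maxR, minI, maxI, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

With the corner values given in the question this would be roughly glOrtho(-2.0, 1.0, -1.2, 1.8, -1.0, 1.0).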
