I want to create a custom decay animation that stops when a specific check returns true.
Right now my code looks like this:
this.decayAnimation = decay(
  this.state.animatedXYValue,
  {
    velocity: { x: 0.5, y: 0.5 },
    deceleration: 0.996,
  }
);
this.decayAnimation.start();
But I want to stop animating (or change the deceleration) if the x or y value
(not the velocity, but the actual x or y value of this.state.animatedXYValue : AnimatedValueXY)
becomes bigger than, let's say, 500.
Any ideas would be much appreciated.
Thank you.
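One possible approach, sketched below: React Native's Animated values support addListener, so you can watch the live x/y position and stop (or swap out) the running animation once a threshold is crossed. The threshold check is kept as a plain function; exceedsThreshold and THRESHOLD are names invented for this sketch, not from the question.

```javascript
// Sketch: stop the decay once |x| or |y| passes a limit.
// exceedsThreshold and THRESHOLD are illustrative names.
const THRESHOLD = 500;

function exceedsThreshold(pos, limit = THRESHOLD) {
  // pos is the { x, y } object an AnimatedValueXY listener receives
  return Math.abs(pos.x) > limit || Math.abs(pos.y) > limit;
}

// Wiring it up (assuming the names from the question):
//
// const id = this.state.animatedXYValue.addListener((pos) => {
//   if (exceedsThreshold(pos)) {
//     this.decayAnimation.stop(); // or restart with a different deceleration
//     this.state.animatedXYValue.removeListener(id);
//   }
// });
// this.decayAnimation.start();
```

Remember to remove the listener (removeListener / removeAllListeners) when the component unmounts, so the callback doesn't fire on a dead component.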
I'm making a simple game in Swift 5 using SpriteKit that contains a ball, a target, and a barrier. The goal is to drag the barrier so the ball bounces off it and hits the target. The ball is the only object that's supposed to have gravity. Everything was working fine until I changed the code to keep an array of barrier objects so I can add more barriers, but now when I run it, the barrier immediately falls, as if it has gravity. Here is the part of the code that adds the barrier:
fileprivate func addBarrier(at position: Point, width: Double, height: Double, angle: Double) {
    // Add a barrier to the scene and make it immobile (it won't move when forces act on it)
    let barrierPoints = [Point(x: 0, y: 0), Point(x: 0, y: height), Point(x: width, y: height), Point(x: width, y: 0)]
    let barrier = PolygonShape(points: barrierPoints)
    barriers.append(barrier)
    barrier.position = position
    barrier.isImmobile = true
    barrier.hasPhysics = true
    barrier.fillColor = .brown
    barrier.angle = angle
    scene.add(barrier)
}
The whole code is available at: https://github.com/Capslockhuh/BouncyBall
I'm investigating the possibilities Processing offers for generative art, and I stumbled upon a problem:
I'd like to generate multiple Bezier curves using a while loop. However, the program skips parts of some curves, while others are drawn properly.
Here's a working example:
void setup() {
  size(1000, 500);
  background(#ffffff);
}

float[] i_x = {1, 1};
float[] i_y = {1, 1};

void draw() {
  while (i_y[0] < height) {
    bezier(0, i_y[0], 100, height-100, width-100, height-100, width, i_y[0]);
    i_y[0] = i_y[0] * 1.1;
  }
  save("bezier.jpg");
}
And here is the output. As you can see, only a few of the curves are drawn in their full shape.
Also, when I draw one of the 'broken' curves outside the loop, it works fine.
I'd appreciate any help. I'm having a good time learning coding concepts with the visual output that Processing provides.
It works as intended. Look what happens when you change the background color (great post, by the way; the working example made it easy enough for me to want to debug it!):
If you look closely, you'll notice that the "inside" of each curve has a fill color. Except that for now it's white. That's why only the topmost curves are "invisible": you're drawing them one after another, starting from the top, so every new curve eats the previous one by painting over it, but only inside the curve. See what happens when I apply some color to differentiate the fill and the background better:
Now that the problem is obvious, here's the answer: transparency.
while (y < height)
{
  fill(0, 0, 0, 0); // this is the important line, you can keep your algo for the rest
  bezier(0, y, offset, height-offset, width-offset, height-offset, width, y);
  y *= 1.1;
}
Which gives us this result:
Have fun!
I have the strict requirement to have a texture with a resolution of (let's say) 512x512, always (even if the window is bigger; SDL basically scales the texture for me on rendering). This is because it's an emulator of a classic old computer that assumes a fixed texture; I can't rewrite the code to adapt to multiple texture sizes and/or texture ratios dynamically.
I use SDL_RenderSetLogicalSize() for the purpose I've described above.
When this is rendered into a window, I can get the mouse coordinates (window-relative) and "scale" back to the texture position using the real window size (since the window can be resized).
However, there is a big problem. As soon as the window's width:height ratio differs from the texture's ratio (for example in full-screen mode, since the ratio of modern displays won't match the ratio I want to use), there are "black bars" at the sides or top/bottom. Which is nice, since I always want the same fixed texture ratio and SDL does it for me. However, I cannot find a way to ask SDL where exactly my texture is rendered inside the window, given the fixed ratio I forced. I only need the position within the texture, but the exact texture origin is placed by SDL itself, not by me.
Surely I could write code to figure out how those "black bars" shift the texture's origin, but I hope there is a simpler, more elegant way to "ask" SDL about this, since it clearly already has the code that positions my texture, and I'd like to reuse that information.
My very ugly solution (it can be optimized, and the floating-point math can probably be avoided, but as a first try ...):
static void get_mouse_texture_coords ( int x, int y )
{
    int win_x_size, win_y_size;
    SDL_GetWindowSize(sdl_win, &win_x_size, &win_y_size);
    // I don't know if there is a saner way to do this ...
    // But we must figure out where the texture is within the window, which
    // can shift because of the fixed ratio versus the window ratio (especially in full-screen mode)
    double aspect_tex = (double)SCREEN_W / (double)SCREEN_H;
    double aspect_win = (double)win_x_size / (double)win_y_size;
    if (aspect_win >= aspect_tex) {
        // side ratio-correction bars must be taken into account
        double zoom_factor = (double)win_y_size / (double)SCREEN_H;
        int bar_size = win_x_size - (int)((double)SCREEN_W * zoom_factor);
        mouse_x = (x - bar_size / 2) / zoom_factor;
        mouse_y = y / zoom_factor;
    } else {
        // top/bottom ratio-correction bars must be taken into account
        double zoom_factor = (double)win_x_size / (double)SCREEN_W;
        int bar_size = win_y_size - (int)((double)SCREEN_H * zoom_factor);
        mouse_x = x / zoom_factor;
        mouse_y = (y - bar_size / 2) / zoom_factor;
    }
}
Here SCREEN_W and SCREEN_H are the dimensions of my texture (quite misleading names, but anyway). The input parameters x and y are the window-relative mouse position (reported by SDL); mouse_x and mouse_y are the result, the texture-based coordinates. This seems to work nicely. However, is there a saner or better solution?
The code which calls the function above is in my event handler loop (which I call regularly, of course), something like this:
void handle_sdl_events ( void ) {
    SDL_Event event;
    while (SDL_PollEvent(&event)) {
        switch (event.type) {
            case SDL_MOUSEMOTION:
                get_mouse_texture_coords(event.motion.x, event.motion.y);
                break;
            [...]
I am trying to use LibGDX to make a simple game in which, once I click on the screen, the Texture in "eggs" should change to the next one in line. Yet every time I touch the screen the app crashes.
Texture[] eggs = new Texture[5];

@Override
public void render() {
    if (Gdx.input.justTouched()) {
        eggs[i] = new Texture(String.format("pic_%d.png", i++));
        batch.begin();
        batch.draw(eggs[i], Gdx.graphics.getWidth() / 2 - eggs[i].getWidth() / 2, Gdx.graphics.getHeight() / 2 - eggs[i].getHeight() / 2);
        batch.end();
    }
}
eggs[i] = ... is evaluated first and receives the new Texture object; i is incremented only afterwards. So when batch.draw(eggs[i], ...) then uses the incremented i as the index, it refers to the next, still-uninitialized (null) element of your array, and the draw call crashes.
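The ordering can be demonstrated in isolation (plain JavaScript here, which has the same post-increment semantics as Java):

```javascript
// Demonstration of the post-increment ordering described above.
// The index on the left-hand side is read while i is still 0; i++ then
// bumps i to 1, so a later eggs[i] lookup hits the empty slot.
let i = 0;
const eggs = new Array(5);

eggs[i] = `pic_${i++}.png`; // writes eggs[0]; i is now 1

console.log(eggs[0]); // "pic_0.png" — the slot that was actually filled
console.log(eggs[i]); // undefined   — eggs[1], the slot a later draw would read
```

Using the incremented index right after the assignment reads the neighbouring, still-empty element, which is exactly the null that crashes the draw call.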
I want to apply an emboss and sketch effect on a bitmap without losing its color.
I have applied color effects, but no luck with emboss yet.
Does anybody have a solution for this? Any help is appreciated.
You can apply a high-pass filter to the image. This means replacing the value of each pixel with the absolute difference between that pixel and the next pixel.
Something like this:
Bitmap img = new Bitmap(100, 100); // your image
Bitmap embossed = new Bitmap(img); // create a clone
for (int i = 0; i < img.Width; i++)
{
    for (int j = 0; j < img.Height; j++)
    {
        Color c = img.GetPixel(i, j);
        Color newColor;
        if (i == img.Width - 1)
        {
            // last column: no next pixel to diff against
            newColor = Color.FromArgb(0, 0, 0);
        }
        else
        {
            Color next = img.GetPixel(i + 1, j);
            newColor = Color.FromArgb(
                Math.Abs(c.R - next.R),
                Math.Abs(c.G - next.G),
                Math.Abs(c.B - next.B));
        }
        embossed.SetPixel(i, j, newColor);
    }
}
When you're done with the grey emboss, you could set the alpha values of the image's pixels according to the result of the emboss.
Consider it as two operations. First, generate the grey embossed image (which you say you have achieved); then, to make a coloured embossed image, perform a mix operation between the original image and the embossed image. There is no single right choice for the form of that operation; it comes down to what effect you wish to achieve.
If you work on the assumption that you have a colour image (R, G, B) and a grey emboss image (E), with each component being a byte, this gives you (for each pixel) four values in the range 0..255.
Since you probably want the dark areas of the emboss to show darker and the bright areas to show brighter, it's useful to have a centred grey level:
w = (E / 128.0) - 1; // convert the 0..255 range to roughly -1..1 (the division must be floating-point)
Now where w is negative things should get darker, and where w is positive things should get brighter.
outputR = R + (R*w);
outputG = G + (G*w);
outputB = B + (B*w);
This will give you black where w is -1, and double the brightness (R*2, G*2, B*2) where w is 1. That will produce a coloured embossed effect. Don't forget to clamp the result to 0..255, though: if it goes higher, cap it (if (x > 255) x = 255;).
That should preserve the colours nicely, but it may not be exactly what you are after. If you want the white in the embossed image to be more than just doubled, you can try a different formula:
outputR = R * pow(10, w); // w == 0 has no effect
outputG = G * pow(10, w); // w == 1 is 10 times brighter
outputB = B * pow(10, w); // w == -1 is 0.1 brightness
There are many many more possibilities.
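Both mix formulas above can be sketched as small helpers (JavaScript here; clamp255, linearMix, and expMix are names invented for this sketch, and the grey level is centred with floating-point division so mid-grey maps to w = 0):

```javascript
// Sketch of the two mix operations, assuming 8-bit channels (0..255).
function clamp255(v) {
  return Math.max(0, Math.min(255, Math.round(v)));
}

// Linear mix: w = -1 gives black, w = 0 leaves the colour alone, w = 1 doubles it.
function linearMix(r, g, b, e) {
  const w = e / 128 - 1; // centre the grey level: 0..255 -> roughly -1..1
  return [clamp255(r + r * w), clamp255(g + g * w), clamp255(b + b * w)];
}

// Exponential mix: w = 1 is 10x brighter, w = -1 is 0.1x brightness.
function expMix(r, g, b, e) {
  const w = e / 128 - 1;
  return [clamp255(r * 10 ** w), clamp255(g * 10 ** w), clamp255(b * 10 ** w)];
}

console.log(linearMix(100, 150, 200, 128)); // [100, 150, 200] — mid-grey, colour unchanged
console.log(linearMix(100, 150, 200, 0));   // [0, 0, 0]       — dark emboss goes to black
```

Note that clamping happens per channel, so a bright emboss can saturate one channel while the others still scale, which slightly shifts the hue; that is inherent to this family of mixes.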
Alternatively, you can convert the RGB to YUV, apply the emboss change directly to the Y component, and then convert back to RGB.
The right choice is more a matter of taste than of an optimally correct formula.