Drawing tiles with SDL_RenderFillRect() causes strange flickering/graphical bugs - c

In my game, the map is a 2D array of tiles, some of which are walls, and I draw each wall with SDL_RenderFillRect(). For some reason, when the camera moves (the position of each rect is determined by the camera offset), light lines appear on the screen, like seams between the tiles, along with something that looks like tearing. I don't understand where this comes from, because I render everything first and call SDL_RenderPresent() once. The bugs are most noticeable on Android, perhaps because the CPU/GPU is slower. I did try enabling VSync and capping the FPS, but that didn't help much.
Here is a piece of code where I draw the tiles (the entire source code can be found here):
// only iterate over walls that are on the screen
for (int i = g_camera.y / CELL_SIZE; i < (g_camera.y + g_camera.h + CELL_SIZE) / CELL_SIZE && i < g_map.height; i++) {
    for (int j = g_camera.x / CELL_SIZE; j < (g_camera.x + g_camera.w + CELL_SIZE) / CELL_SIZE && j < g_map.width; j++) {
        if (g_map.matrix[i][j] == MAP_WALL) {
            SDL_Rect coords = {
                j * CELL_SIZE - g_camera.x,
                i * CELL_SIZE - g_camera.y,
                CELL_SIZE,
                CELL_SIZE
            };
            SDL_RenderFillRect(g_renderer, &coords);
        }
        else if (g_map.matrix[i][j] == MAP_FOOD) {
            render_texture(g_leaf_texture, j * CELL_SIZE - g_camera.x, i * CELL_SIZE - g_camera.y);
        }
    }
}
This is what the result looks like, though it's very hard to capture the graphical bugs:

I'm far from an SDL2 expert, but I think you shouldn't draw the map tile by tile. That means that on every tick of the program SDL2 redraws your whole map one tile at a time, which is a lot of function calls.
I think you want to draw to a preliminary texture first. See the answer here: Fastest way to render a tiled map with SDL2
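As a rough illustration of that idea (a minimal sketch, not the project's actual code): bake the static tiles into a texture once, then copy only the visible part each frame. It reuses the question's g_renderer, g_map, g_camera, CELL_SIZE and MAP_WALL; the g_map_texture variable and the colour are made up, and the renderer is assumed to support render targets (SDL_RENDERER_TARGETTEXTURE):
// One-time setup: draw every wall into an off-screen texture.
SDL_Texture *g_map_texture = SDL_CreateTexture(g_renderer, SDL_PIXELFORMAT_RGBA8888,
                                               SDL_TEXTUREACCESS_TARGET,
                                               g_map.width * CELL_SIZE, g_map.height * CELL_SIZE);
SDL_SetRenderTarget(g_renderer, g_map_texture);
SDL_SetRenderDrawColor(g_renderer, 255, 255, 255, 255); // placeholder wall colour
for (int i = 0; i < g_map.height; i++) {
    for (int j = 0; j < g_map.width; j++) {
        if (g_map.matrix[i][j] == MAP_WALL) {
            SDL_Rect cell = { j * CELL_SIZE, i * CELL_SIZE, CELL_SIZE, CELL_SIZE };
            SDL_RenderFillRect(g_renderer, &cell);
        }
    }
}
SDL_SetRenderTarget(g_renderer, NULL);

// Every frame: one copy call instead of one fill call per tile.
SDL_Rect src = { g_camera.x, g_camera.y, g_camera.w, g_camera.h };
SDL_Rect dst = { 0, 0, g_camera.w, g_camera.h };
SDL_RenderCopy(g_renderer, g_map_texture, &src, &dst);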

User keltar has provided the answer in the comments.
I was setting g_camera.x and g_camera.y in a callback function that runs on another thread, so the camera position was changing in parallel with drawing the map. The fix was to update the camera position in the function that draws the map instead (on the main thread).
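Purely as an illustration of that fix (the move_dx/move_dy variables and the update_camera()/draw_map() names below are hypothetical, not from the project): the callback only records the requested movement, and the render code applies it on the main thread before the tiles are drawn, so the camera offsets can no longer change in the middle of a frame.
/* Hypothetical sketch: the timer/input callback records intent only. */
static int move_dx, move_dy;   /* written by the callback (synchronization details omitted) */

static void update_camera(void)
{
    /* Runs on the main thread, once per frame, before the map is drawn. */
    g_camera.x += move_dx;
    g_camera.y += move_dy;
    move_dx = move_dy = 0;
}

static void draw_map(void)
{
    update_camera();
    /* ... the wall/food loop from the question, using a g_camera that is now stable for the whole frame ... */
}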

Related

Get texture coordinates of mouse position in SDL2?

I have the strict requirement to have a texture with a resolution of (let's say) 512x512, always (even if the window is bigger; SDL basically scales the texture for me on rendering). This is because it's an emulator of a classic old computer that assumes a fixed texture; I can't rewrite the code to adopt multiple texture sizes and/or texture ratios dynamically.
I use SDL_RenderSetLogicalSize() for the purpose I've described above.
Surely, when this is rendered into a window, I can get the mouse coordinates (window relative) and "scale" them back to a texture position by querying the real window size (since the window can be resized).
However, there is a big problem here. As soon as the window's width:height ratio is not the same as the texture's ratio (for example in full-screen mode, since the ratio of modern displays won't match the ratio I want to use), there are "black bars" at the sides or at the top/bottom. Which is nice, since I always want the same fixed texture ratio and SDL does it for me, etc. However, I cannot find a way to ask SDL where exactly my texture is rendered inside the window, based on the fixed ratio I forced. I only need the position within the texture, and the exact texture origin is placed by SDL itself, not by me.
Surely, I can write some code to figure out how those "black bars" shift the origin of the texture, but I hope there is a simpler and more elegant way to "ask" SDL about this: it surely already has code that positions my texture somewhere, so I would like to reuse that information.
My very ugly solution (it can be optimized, and I think the floating-point math can be avoided, but as a first try ...):
static void get_mouse_texture_coords ( int x, int y )
{
    int win_x_size, win_y_size;
    SDL_GetWindowSize(sdl_win, &win_x_size, &win_y_size);
    // I don't know if there is a saner way to do this ...
    // But we must figure out where the texture is within the window,
    // which can change because of the fixed ratio versus the window ratio (especially in full-screen mode)
    double aspect_tex = (double)SCREEN_W / (double)SCREEN_H;
    double aspect_win = (double)win_x_size / (double)win_y_size;
    if (aspect_win >= aspect_tex) {
        // side ratio-correction bars must be taken into account
        double zoom_factor = (double)win_y_size / (double)SCREEN_H;
        int bar_size = win_x_size - (int)((double)SCREEN_W * zoom_factor);
        mouse_x = (x - bar_size / 2) / zoom_factor;
        mouse_y = y / zoom_factor;
    } else {
        // top/bottom ratio-correction bars must be taken into account
        double zoom_factor = (double)win_x_size / (double)SCREEN_W;
        int bar_size = win_y_size - (int)((double)SCREEN_H * zoom_factor);
        mouse_x = x / zoom_factor;
        mouse_y = (y - bar_size / 2) / zoom_factor;
    }
}
Here SCREEN_W and SCREEN_H are the dimensions of my texture (quite misleading names, but anyway). The input parameters x and y are the window-relative mouse position (reported by SDL); mouse_x and mouse_y are the result, the texture-based coordinates. This seems to work nicely. However, is there a saner or better solution?
The code which calls the function above is in my event handler loop (which I call regularly, of course), something like this:
void handle_sdl_events ( void ) {
    SDL_Event event;
    while (SDL_PollEvent(&event)) {
        switch (event.type) {
            case SDL_MOUSEMOTION:
                get_mouse_texture_coords(event.motion.x, event.motion.y);
                break;
            [...]
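As a side note (not part of the original question): if a recent SDL2 is available, SDL 2.0.18 added SDL_RenderWindowToLogical(), which maps window coordinates into the logical (texture) coordinate space, including the offset introduced by the letterbox bars. A minimal sketch, assuming a renderer variable named sdl_renderer:
// Inside the SDL_MOUSEMOTION case, instead of the manual bar math above:
float lx, ly;
SDL_RenderWindowToLogical(sdl_renderer, event.motion.x, event.motion.y, &lx, &ly);
mouse_x = (int)lx;   // already in texture (logical) coordinates
mouse_y = (int)ly;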

Rotating a group of objects around their center in JSModeler

I am using JSModeler with three.js extension. My viewer is JSM.ThreeViewer(). The codepen is: https://codepen.io/Dharnidharka/pen/QRzBQa
What I am trying to achieve is a rotation of the concentric circles around their center, but currently the rotation is about the world center.
In three.js, this could be done by having a parent Object3D, and adding all the meshes to that object, centering that object by using geometry.center() and then having the rotation. But I could not find an Object3D extension for JSModeler.
Another way in three.js could be to group objects around a common pivot but even that approach did not work for me.
A third approach I tried was the solution in What's the right way to rotate an object around a point in three.js?, applied in the Update() loop, but that didn't work either.
I used the following to move the object to the pivot point:
meshes = JSM.ConvertModelToThreeMeshes (model);
viewer.AddMeshes (meshes);
for (var i = 3; i < 10; i++) {
    meshes[i].geometry.computeBoundingBox();
    meshes[i].geometry.boundingBox.getCenter(center);
    meshes[i].geometry.center();
    var newPos = new THREE.Vector3(-center.x, -center.y, -center.z);
    meshes[i].position.copy(newPos);
}
The expected output is that the 2 circles are rotating about the common center, which also would be the world center. Currently, they are rotating about the world center, but not about their common center.
Finally figured it out.
In the code posted above, I was centering the geometry and then moving the meshes back to their original positions. What was needed was to center the geometry and then move the geometries around the common center relative to each other, so the configuration doesn't get messed up. There are two mesh groups; the mid point has to be computed for each, and the geometry translated accordingly. The solution in code:
var center = new THREE.Vector3();
var mid1 = new THREE.Vector3();
for (var i = 0; i < len1; i++) {
    viewer.GetMesh(i).geometry.computeBoundingBox();
    viewer.GetMesh(i).geometry.boundingBox.getCenter(center);
    viewer.GetMesh(i).geometry.verticesNeedUpdate = true;
    mid1.x += center.x;
    mid1.y += center.y;
    mid1.z += center.z;
}
mid1.x = mid1.x / len1;
mid1.y = mid1.y / len1;
mid1.z = mid1.z / len1;
for (var i = 0; i < len1; i++) {
    viewer.GetMesh(i).geometry.computeBoundingBox();
    viewer.GetMesh(i).geometry.boundingBox.getCenter(center);
    viewer.GetMesh(i).geometry.center();
    var newPos = new THREE.Vector3(center.x - mid1.x, center.y - mid1.y, center.z - mid1.z);
    viewer.GetMesh(i).geometry.translate(newPos.x, newPos.y, newPos.z);
}

How to fill Oxyplot AreaSeries in different areas?

I would like to create an AreaSeries in Oxyplot that consists of a set of intervals.
See the picture of what I would like to achieve (AreaSeries image).
This is the code I use to fill in an area of the chart; the chart uses two linear axes.
seriesArea = new AreaSeries { Title = "2Hz" };
for (var j = 0; j < 30; j++)
{
    // Draw a vertical line from a specific value.
    // This represents the starting point of the area series.
    // e.g.: triggers[0]=0; triggers[1]=236 -> first area (see picture)
    seriesArea.Points.Add(new DataPoint(triggers[0], j));
}
for (var j = 0; j < 30; j++)
{
    // This time the ending point of the area series.
    seriesArea.Points.Add(new DataPoint(triggers[1], j));
}
seriesArea.Color = OxyColors.LightPink;
seriesArea.Fill = OxyColors.LightPink;
plotModel.Series.Add(seriesArea);
I cannot figure out what I have to do in order to draw the same series and fill only the intervals that I want. With the code above, the series also fills the areas that I would like to leave blank.
I tried using seriesArea.Points2.Add(new DataPoint(i, j)) in the places where I don't want it to draw and making Points2 transparent, but it did not work as I expected.
Please let me know if there is a way to draw different intervals in an Oxyplot chart with the same series.

Emboss a Bitmap without losing color in C# programmatically

I want to apply emboss and sketch effects to a bitmap without losing its color.
I have applied color effects, but no luck with emboss yet.
Does anybody have a solution for this?
Please help.
You can apply a high-pass filter to the image. That means replacing the value of each pixel with the absolute difference between that pixel and the next pixel.
Something like this:
Bitmap img = new Bitmap(100, 100); // your image
Bitmap embossed = new Bitmap(img); // create a clone
for (int i = 0; i < img.Width; i++)
{
    for (int j = 0; j < img.Height; j++)
    {
        Color c = img.GetPixel(i, j);
        Color newColor;
        if (i == img.Width - 1)
        {
            newColor = Color.FromArgb(0, 0, 0);
        }
        else
        {
            // absolute difference with the next pixel to the right
            Color next = img.GetPixel(i + 1, j);
            newColor = Color.FromArgb(
                Math.Abs(c.R - next.R),
                Math.Abs(c.G - next.G),
                Math.Abs(c.B - next.B));
        }
        embossed.SetPixel(i, j, newColor);
    }
}
When you are done with the gray emboss, you could set the alpha values of the image pixels according to the result of the emboss.
Consider it as two operations. First, generate the grey embossed image (which you say you have achieved); then, to make a coloured embossed image, perform a mix operation between the original image and the embossed image. There is no single right choice for what form of operation to use; it comes down to the effect you wish to achieve.
If you work on the assumption that you have a colour image (R, G, B) and a grey emboss image (E), with each component being a byte, this gives you (for each pixel) four values in the range 0..255.
Since you probably want the dark areas of the emboss to show darker and the bright areas to show brighter, it is useful to have a centred grey level:
w = (E / 128.0) - 1; // convert the 0..255 range to roughly -1..1
Now where w is negative things should get darker, and where w is positive things should get brighter.
outputR = R + (R*w);
outputG = G + (G*w);
outputB = B + (B*w);
This gives you black where w is -1 and double the brightness (R*2, G*2, B*2) where w is 1, which produces a coloured embossed effect. Don't forget to clamp the result to 0..255 though; if it goes higher, cap it: if (x > 255) x = 255;
That should preserve the colours nicely, but it may not be exactly what you are after. If you want the white in the embossed image to be more than just doubled, you can try a different formula.
outputR = R * Math.Pow(10, w); // w == 0 has no effect
outputG = G * Math.Pow(10, w); // w == 1 is 10 times brighter
outputB = B * Math.Pow(10, w); // w == -1 is 0.1 brightness
There are many more possibilities. You could also convert the RGB to YUV, apply the emboss change directly to the Y component, and then convert back to RGB.
The right choice is more a matter of taste than an optimally correct formula.

Trying to get a mouse-look camera working in OpenGL on Mac OSX

I've been working on a demo in OpenGL and trying to implement an FPS-like mouse-look camera. I've been using Mac OS X Leopard, so I've had to use Carbon to get the screen coordinates and return the mouse to the centre of the screen after movement, which works fine most of the time. Below is the related code from my mouse method:
CGPoint pnt;
pnt.x = glutGet(GLUT_WINDOW_WIDTH)/2 + glutGet(GLUT_WINDOW_X);
pnt.y = glutGet(GLUT_WINDOW_HEIGHT)/2 + glutGet(GLUT_WINDOW_Y);
int diffX;
int diffY;
CGGetLastMouseDelta(&diffX, &diffY);
if (diffX == 0 && diffY == 0) return;
if (diffX > 0)
    angle += diffX/5;
else if (diffX < 0)
    angle += diffX/5;
if (diffY > 0 && pitch < 90)
    pitch += diffY/5;
else if (diffY < 0 && pitch > -70)
    pitch += diffY/5;
CGDisplayMoveCursorToPoint(0, pnt);
The problem is annoyingly simple: The first time CGGetLastMouseDelta is called, it returns the difference between the mouse position before the program started and the centre of the window. This means that when the program begins, the camera is facing the right way as it should be, but as soon as I touch the mouse it jumps to a different position.
I've got another call to centre the cursor inside a function to initialise everything, shown below:
CGPoint pnt;
pnt.x = glutGet(GLUT_WINDOW_WIDTH)/2 + glutGet(GLUT_WINDOW_X);
pnt.y = glutGet(GLUT_WINDOW_HEIGHT)/2 + glutGet(GLUT_WINDOW_Y);
CGDisplayMoveCursorToPoint(0, pnt);
I know very little about Carbon, and have been searching like mad to find an answer, but to no avail. Is there anything else I should be doing to avoid this jumping?
CoreGraphics is not a Carbon API, so there's a good chance you're looking in the wrong place. Try using CGAssociateMouseAndMouseCursorPosition.
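A minimal sketch of that suggestion (not from the original demo; the function names enable_mouse_look/disable_mouse_look are made up): disassociate the cursor from mouse movement once when mouse-look starts, keep reading the deltas as before, and re-associate it on exit. This avoids having to warp the cursor back to the centre every frame, which is what produces the spurious first delta.
#include <stdbool.h>
#include <ApplicationServices/ApplicationServices.h>

static void enable_mouse_look(void)
{
    /* The cursor stays put, but CGGetLastMouseDelta() keeps reporting movement. */
    CGAssociateMouseAndMouseCursorPosition(false);
    CGDisplayHideCursor(kCGDirectMainDisplay);
}

static void disable_mouse_look(void)
{
    CGDisplayShowCursor(kCGDirectMainDisplay);
    CGAssociateMouseAndMouseCursorPosition(true);
}
With this in place, the mouse handler can keep using CGGetLastMouseDelta(&diffX, &diffY) but drop the CGDisplayMoveCursorToPoint() call.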
