I have a strict requirement to have a texture with a resolution of (let's say) 512x512, always, even if the window is bigger (SDL basically scales the texture for me on rendering). This is because it's an emulator of a classic old computer that assumes a fixed texture; I can't rewrite the code to adapt to multiple texture sizes and/or aspect ratios dynamically.
I use SDL_RenderSetLogicalSize() for the purpose I've described above.
Of course, when this is rendered into a window, I can get the mouse coordinates (window relative) and "scale" them back to the texture position by querying the real window size (since the window can be resized).
However, there is a big problem here. As soon as the window's width:height ratio differs from the texture's ratio (for example in full screen mode, since the aspect ratio of modern displays does not match the ratio I want to use), there are "black bars" at the sides or at the top/bottom. That part is fine: I always want the same fixed texture ratio, and SDL handles it for me. However, I cannot find a way to ask SDL where exactly my texture is rendered inside the window given the fixed ratio I forced. I need the position within the texture only, and the exact texture origin is chosen by SDL itself, not by me.
Of course I can write some code to figure out how those "black bars" shift the origin of the texture, but I hope there is a simpler and more elegant way to "ask" SDL about this; it must already have the code that positions my texture, so I would like to re-use that information.
My very ugly solution (it can be optimized, and I think the floating point math can be avoided, but as a first try ...):
static void get_mouse_texture_coords ( int x, int y )
{
    int win_x_size, win_y_size;
    SDL_GetWindowSize(sdl_win, &win_x_size, &win_y_size);
    // I don't know if there is a saner way to do this ...
    // But we must figure out where the texture sits within the window,
    // which shifts because of the fixed texture ratio versus the window ratio (especially in full screen mode).
    double aspect_tex = (double)SCREEN_W / (double)SCREEN_H;
    double aspect_win = (double)win_x_size / (double)win_y_size;
    if (aspect_win >= aspect_tex) {
        // window is wider than the texture: side bars must be taken into account
        double zoom_factor = (double)win_y_size / (double)SCREEN_H;
        int bar_size = win_x_size - (int)((double)SCREEN_W * zoom_factor);
        mouse_x = (x - bar_size / 2) / zoom_factor;
        mouse_y = y / zoom_factor;
    } else {
        // window is taller than the texture: top/bottom bars must be taken into account
        double zoom_factor = (double)win_x_size / (double)SCREEN_W;
        int bar_size = win_y_size - (int)((double)SCREEN_H * zoom_factor);
        mouse_x = x / zoom_factor;
        mouse_y = (y - bar_size / 2) / zoom_factor;
    }
}
Here SCREEN_W and SCREEN_H are the dimensions of my texture (the names are somewhat misleading, but anyway). The input parameters x and y are the window-relative mouse position (reported by SDL), and mouse_x and mouse_y are the result, the texture-based coordinates. This seems to work nicely. However, is there a saner or better solution?
The code which calls the function above is in my event handler loop (which I call regularly, of course), something like this:
void handle_sdl_events ( void ) {
    SDL_Event event;
    while (SDL_PollEvent(&event)) {
        switch (event.type) {
            case SDL_MOUSEMOTION:
                get_mouse_texture_coords(event.motion.x, event.motion.y);
                break;
            [...]
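For reference: newer SDL2 releases (2.0.18 and later) appear to expose SDL_RenderWindowToLogical(), which maps window coordinates into the logical coordinate space set with SDL_RenderSetLogicalSize(), black bars included. If that SDL version can be assumed, the bar math above could possibly be replaced with something like this sketch (the function name is made up here, and sdl_ren is an assumed renderer variable, not something from the code above):
static void get_mouse_texture_coords_logical ( int x, int y )
{
    // Sketch only: requires SDL 2.0.18+; 'sdl_ren' is assumed to be the SDL_Renderer
    // on which SDL_RenderSetLogicalSize(sdl_ren, SCREEN_W, SCREEN_H) was called.
    float lx, ly;
    SDL_RenderWindowToLogical(sdl_ren, x, y, &lx, &ly);
    mouse_x = (int)lx;   // already in SCREEN_W x SCREEN_H texture space, bars accounted for
    mouse_y = (int)ly;
}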
I'm making a project in C with the Allegro 5 library. I have some skins for my snake, but I want to make something better, like a user-styled skin. I think the best way will be:
1. Make a white snake like the one in the image in this post.
2. Recolor the white to another color.
Now I'm looking for an Allegro function to make this fast and easy.
What I have so far:
al_draw_tinted_bitmap - but it also colors the eyes and tongue
al_get_pixel / al_put_pixel - but I'd have to give the x/y of every white pixel, and that's really hard with a bitmap like this (I mean, with a square it would be easier)
So I think I need a function like "get_pixel", but one where I specify an rgb(R, G, B) value instead of X/Y.
What should I try to make it work like that?
[image: bitmap of the snake]
This question led to a nice discussion on the Allegro forums. Credit goes to Dizzy Egg for working out this example in full detail; I'll summarize the answer here for future reference.
A solution with al_draw_tinted_bitmap is definitely possible, but I'd choose the get_pixel / put_pixel approach, as it gives you a bit more control. It's not hard to replace pixels of one color with pixels of another color. Just loop over each pixel in the bitmap and do this:
color = al_get_pixel(snake_white, x, y);
if(color.r == 1.0 && color.g == 1.0 && color.b == 1.0) {
    al_put_pixel(x, y, al_map_rgb(50, 255, 50)); // Choose your own colour!
}
else {
    al_put_pixel(x, y, color);
}
The problem in this case is that you don't have just one color (white) but also shades of grey, due to anti-aliasing on the outlines of the snake. So what we can do instead is check, for each pixel, whether the r, g and b components are all non-zero. If they are, the pixel is neither black, nor red (the tongue), nor yellow (the eyes). We then multiply the grey values by r, g, b components of our own choosing.
color = al_get_pixel(snake_white, x, y);
if(color.r >= 0.01 && color.g >= 0.01 && color.b >= 0.01) {
    al_put_pixel(x, y, al_map_rgba_f(new_color.r * color.r, new_color.g * color.g, new_color.b * color.b, color.a));
}
else {
    al_put_pixel(x, y, color);
}
You could make it smarter in a lot of ways, but it works well for the snake image. See the linked forum thread for the full code.
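For completeness, here is a sketch of what the whole recoloring loop could look like. This is not the forum code: the recolor_snake helper name is made up, snake_white and new_color are taken from the snippets above, and it assumes the usual Allegro 5 locking and target-bitmap calls are available.
ALLEGRO_BITMAP *recolor_snake(ALLEGRO_BITMAP *snake_white, ALLEGRO_COLOR new_color)
{
    int w = al_get_bitmap_width(snake_white);
    int h = al_get_bitmap_height(snake_white);
    ALLEGRO_BITMAP *result = al_create_bitmap(w, h);

    al_set_target_bitmap(result);   /* al_put_pixel draws onto the current target bitmap */
    al_lock_bitmap(snake_white, ALLEGRO_PIXEL_FORMAT_ANY, ALLEGRO_LOCK_READONLY);
    al_lock_bitmap(result, ALLEGRO_PIXEL_FORMAT_ANY, ALLEGRO_LOCK_WRITEONLY);

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            ALLEGRO_COLOR color = al_get_pixel(snake_white, x, y);
            if (color.r >= 0.01 && color.g >= 0.01 && color.b >= 0.01) {
                /* white or grey pixel: multiply by the chosen color, keep alpha */
                al_put_pixel(x, y, al_map_rgba_f(new_color.r * color.r,
                                                 new_color.g * color.g,
                                                 new_color.b * color.b,
                                                 color.a));
            } else {
                /* black outline, red tongue, yellow eyes: keep as-is */
                al_put_pixel(x, y, color);
            }
        }
    }

    al_unlock_bitmap(result);
    al_unlock_bitmap(snake_white);
    /* remember to set the target back to the backbuffer before drawing the frame */
    return result;
}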
I'm investigating the possibilities that Processing offers for generative art, and I stumbled upon a problem:
I'd like to generate multiple Bezier curves using a while loop. However, the program skips parts of some curves, while others are drawn properly.
Here's a working example:
void setup() {
    size(1000, 500);
    background(#ffffff);
}

float[] i_x = {1, 1};
float[] i_y = {1, 1};

void draw() {
    while (i_y[0] < height)
    {
        bezier(0, i_y[0], 100, height-100, width - 100, height-100, width, i_y[0]);
        i_y[0] = i_y[0] * 1.1;
    }
    save("bezier.jpg");
}
And here is the output. As you can see, only a few of the curves are drawn in their full shape.
Also, when I draw one of the 'broken' curves outside of the loop, it works fine.
I'd appreciate any help. I'm having a good time learning coding concepts with the visual output that Processing provides.
It works as intended. Look what happens when you change the background color (great post btw, the working example made it good enough for me to want to debug it!):
If you look closely, you'll notice that the "inside" of the curve has a color. Except that for now it's white. That's why only the topmost curves are "invisible": you're drawing them one after the other, starting from the top, so every new curve eats the previous one by painting over it, but only "inside" the curve. See what happens when I apply some color to differentiate the fill and the background better:
Now that the problem is obvious, here's the answer: transparency.
while (y < height)
{
    fill(0, 0, 0, 0); // this is the important line, you can keep your algo for the rest
    bezier(0, y, offset, height-offset, width - offset, height-offset, width, y);
    y *= 1.1;
}
Which gives us this result:
Have fun!
I have to draw a circle on several images. For each image the radius of curvature is different, while the center stays constant.
The problem is: no matter how big the circle is, it must not cross into the upper half of the image. It's OK if it becomes invisible or if only a part of it is visible in the lower half.
I am using OpenCV 2.4.4 with the C API.
The points of the circle are found by:
for(angle1 = 0; angle1 < 360; angle1++)
{
    // sin()/cos() expect radians, so the degree value has to be converted
    x[angle1] = r * sin(angle1 * CV_PI / 180.0) + axis_x;
    y[angle1] = r * cos(angle1 * CV_PI / 180.0) + axis_y;
}
FYI:
cvCircle(img, center_circle, r, cvScalar(0, 0, 255, 0), 2, 8, 0);
draws the circle across the entire image, which I don't want to happen.
How can I do it? Note: no part of the circle should appear in the upper half of the image.
And the code should use OpenCV's C API.
In MATLAB this is pretty easy: I only have to select the pixels and map them onto the image.
I am new to OpenCV, and operations like img->data.i/f/s/db[50] = 50; give me errors.
A pretty naive approach is to create a copy of the upper half of the image, draw the complete circle, and then copy the upper half back into the original image. This may not be the best approach, but it works. Here is how it can be achieved:
void drawCircleLowerHalf(IplImage* image, CvPoint center, int radius, CvScalar color, int thickness, int line_type, int shift)
{
    CvRect roi = cvRect(0, 0, image->width, image->height / 2);
    IplImage* upperHalf = cvCreateImage(cvSize(image->width, image->height / 2), image->depth, image->nChannels);

    // save the upper half of the image
    cvSetImageROI(image, roi);
    cvCopy(image, upperHalf);
    cvResetImageROI(image);

    // draw the full circle on the whole image
    cvCircle(image, center, radius, color, thickness, line_type, shift);

    // restore the saved upper half, erasing whatever part of the circle was drawn there
    cvSetImageROI(image, roi);
    cvCopy(upperHalf, image);
    cvResetImageROI(image);

    cvReleaseImage(&upperHalf);
}
Just call this function with the same arguments you would pass to cvCircle.
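A hypothetical call could look like this (the center and radius here are placeholder values, not taken from the question):
CvPoint center = cvPoint(img->width / 2, (3 * img->height) / 4);   /* placeholder center in the lower half */
drawCircleLowerHalf(img, center, 150, cvScalar(0, 0, 255, 0), 2, 8, 0);   /* red circle, thickness 2 */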
I have a WPF Canvas that I want to make a bitmap of.
Specifically, I want to render it actual size on a 300dpi bitmap.
The "actual size" of the objects on the canvas is 10 device independent pixels = 1" in real life.
Theoretically, WPF device independent pixels are 96dpi.
I've spent days trying to get this to work and am coming up flummoxed.
My understanding is that the general procedure is roughly:
var masterBitmap = new RenderTargetBitmap((int)(canvas.ActualWidth * ?SomeFactor?),
                                          (int)(canvas.ActualHeight * ?SomeFactor?),
                                          BitmapDpi, BitmapDpi, PixelFormats.Default);
masterBitmap.Render(canvas);
and that I need to set the canvas's LayoutTransform to a ScaleTransform of ?SomeOtherFactor? and then do a measure and arrange of the canvas to ?SomeDesiredSize?
What I am stuck on is what to use for the values of ?SomeFactor?, ?SomeOtherFactor? and ?SomeDesiredSize? to make this work. MSDN documentation gives no indication of what factors to use.
I use this code to display images with 1:1 pixel accuracy.
double dpiXFactor, dpiYFactor;
Matrix m = PresentationSource.FromVisual(Application.Current.MainWindow).CompositionTarget.TransformToDevice;
if (m.M11 > 0 && m.M22 > 0)
{
    dpiXFactor = m.M11;
    dpiYFactor = m.M22;
}
else
{
    // Sometimes this can return a matrix with 0s.
    // Fall back to assuming normal DPI in this case.
    dpiXFactor = 1;
    dpiYFactor = 1;
}

double width = widthPixels / dpiXFactor;
double height = heightPixels / dpiYFactor;
Don't forget to enable UseLayoutRounding on the control as well.
I am drawing a polygon in a square window. When I resize the window, for instance by going full screen, the aspect ratio gets distorted. In a reference I found one way of preserving the aspect ratio. Here is the code:
void reshape (int width, int height) {
    float cx, halfWidth = width*0.5f;
    float aspect = (float)width/(float)height;
    glViewport (0, 0, (GLsizei) width, (GLsizei) height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(cx-halfWidth*aspect, cx+halfWidth*aspect, bottom, top, zNear, zFar);
    glMatrixMode (GL_MODELVIEW);
}
Here, cx is the eye-space center of the zNear plane in X. Could someone please explain how I can calculate this? I believe it should be the average of the first two arguments originally passed to glFrustum(). Am I right? Any help will be greatly appreciated.
It looks like what you want to do is maintain the field of view (angle of view) when the aspect ratio changes. See the section titled "9.085 How can I make a call to glFrustum() that matches my call to gluPerspective()?" of the OpenGL FAQ for details on how to do that. Here's the short version:
fov*0.5 = arctan ((top-bottom)*0.5 / near)
top = tan(fov*0.5) * near
bottom = -top
left = aspect * bottom
right = aspect * top
See the link for details.
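As a rough sketch of how those formulas can be dropped into a reshape handler (the 60-degree vertical field of view and the near/far values here are arbitrary example choices, not taken from the question, and it assumes <math.h> is included for tanf):
void reshape (int width, int height)
{
    const float pi = 3.14159265f;
    float aspect = (float)width / (float)height;
    float fovy   = 60.0f * pi / 180.0f;   /* arbitrary vertical field of view, in radians */
    float zNear  = 1.0f, zFar = 100.0f;   /* arbitrary clip planes */

    float top    = tanf(fovy * 0.5f) * zNear;
    float bottom = -top;
    float left   = aspect * bottom;
    float right  = aspect * top;

    glViewport(0, 0, (GLsizei)width, (GLsizei)height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, zNear, zFar);
    glMatrixMode(GL_MODELVIEW);
}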
The first two arguments are the X coordinates of the left and right clipping planes in eye space. Unless you are doing off-axis tricks (for example, to display uncentered projections across multiple monitors), left and right should have the same magnitude and opposite sign, which would make your cx variable zero.
If you are having trouble understanding glFrustum, you can always use gluPerspective instead, which has a somewhat simpler interface.