Scaled Layers in GDI (C)

Original question
Basically, I have two bitmaps, and I want to put one behind the other, scaled down to half its size.
Both are centered, and are of the same resolution.
The catch is that I want to put more than one bitmap on this back layer eventually, and want the scaling to apply to the whole layer and not just the individual bitmap.
My thought is to use a memory DC for the back layer, capture its contents into a bitmap of its own, and use StretchBlt to place it in my main DC.
The code I have right now doesn't work, and I can't make sense of it, let alone find anyone who has done this before for direction.
My variables at the moment are as follows:
hBitmap - back bitmap
hFiller - front bitmap
hdc - main DC
ldc - back DC (created with CreateCompatibleDC(hdc))
resh - width of hdc
resv - height of hdc
Note that my viewport origin is set to the center.
--this part above is solved, with the one major issue being that it does not keep the back layers...
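For reference, the memory-DC idea above boils down to something like this minimal sketch, using the variable names listed (assuming hBitmap is 128x128, as in the revised code below, and that hdc's viewport origin is already at the center):

// Minimal sketch, not the asker's actual fix: select the back bitmap into
// a memory DC and stretch it at half the window size, centered on the origin.
ldc = CreateCompatibleDC(hdc);
HGDIOBJ oldBmp = SelectObject(ldc, hBitmap);
StretchBlt(hdc, -(resh/4), -(resv/4), resh/2, resv/2,
           ldc, 0, 0, 128, 128, SRCCOPY);
SelectObject(ldc, oldBmp);
DeleteDC(ldc);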
Revised Question
Here's my code. Everything works as intended except for the fact that the layers do not properly stack. They seem to erase what is underneath or fill it with black.
For the record, this is a direct copy of my code. I explain sections of it, but there is nothing missing between the code blocks.
case WM_TIMER:
{
    switch(wParam)
    {
    case FRAME:
If any position or rotation values have changed, the following section of code clears the screen and prepares it to be redrawn:
if(reload == TRUE){
    tdc = CreateCompatibleDC(hdc);
    oldFiller = SelectObject(tdc, hFiller);
    GetObject(hFiller, sizeof(filler), &filler);
    StretchBlt(hdc, 0-(resh/2), 0-(resv/2), resh, resv, tdc, 0, 0, 1, 1, SRCCOPY);
    SelectObject(tdc, oldFiller);
    DeleteDC(tdc);

    if(turn == TRUE){
        xForm.eM11 = (FLOAT) cos(r/angleratio);
        xForm.eM12 = (FLOAT) sin(r/angleratio);
        xForm.eM21 = (FLOAT) -sin(r/angleratio);
        xForm.eM22 = (FLOAT) cos(r/angleratio);
        xForm.eDx = (FLOAT) 0.0;
        xForm.eDy = (FLOAT) 0.0;
        SetWorldTransform(hdc, &xForm);
    }
This is the part that only partially works. At a distance of 80, my scale value makes the bitmap 1 pixel by 1 pixel, so I consider that my "draw distance".
It scales properly, but the layers do not stack, as mentioned above:
    for(int i=80; i>1; i--){
        tdc = CreateCompatibleDC(hdc);
        tbm = CreateCompatibleBitmap(hdc, resh, resv);
        SelectObject(tdc, tbm);
        BitBlt(tdc, 0-(resh/2), 0-(resv/2), resh, resv, hdc, 0, 0, SRCCOPY);

        //drawing code goes in here
        ldc = CreateCompatibleDC(hdc);
        oldBitmap = SelectObject(ldc, hBitmap);
        StretchBlt(tdc, (int)(angleratio*atan((double)128/(double)i)), 0,
                   (int)(angleratio*atan((double)128/(double)i)),
                   (int)(angleratio*atan((double)128/(double)i)),
                   ldc, 0, 0, 128, 128, SRCCOPY);
        SelectObject(ldc, oldBitmap);
        DeleteDC(ldc);

        BitBlt(hdc, 0, 0, resh, resv, tdc, 0, 0, SRCCOPY);
        DeleteObject(tbm);
        DeleteDC(tdc);
    }
    reload = FALSE;
}
The section below just checks for keyboard input, which changes the position or rotation of the "camera".
This part works fine and can be ignored:
if(GetKeyboardState(NULL)==TRUE){
    reload = TRUE;
    if(GetKeyState(VK_UP)<0){
        fb--;
    }
    if(GetKeyState(VK_DOWN)<0){
        fb++;
    }
    if(GetKeyState(VK_RIGHT)<0){
        lr--;
    }
    if(GetKeyState(VK_LEFT)<0){
        lr++;
    }
    if(GetKeyState(0x57)<0){ // 'W' key
        p++;
    }
    if(GetKeyState(0x53)<0){ // 'S' key
        p--;
    }
}
break;
    }
}
break;

Related

Why does my window lag when I run multiple instances of it?

I created a Win32 window app that moves around the screen occasionally, sort of like a pet. As it moves, it switches between 2 bitmaps to show an 'animation' of it moving. The implementation involves multiple WM_TIMER messages: one timer moves the window, another changes the bitmap and window region (to only display the bitmap without the transparent parts) as it is moving, and another changes the direction the window moves.
The window runs perfectly smoothly by itself, but when I open multiple instances, the animations and movements start to lag - it is not so noticeable at 2 windows, but 3 instances and above cause every single window to start lagging very noticeably. The movement and animations are choppy and even freeze occasionally.
I have tried removing portions of the code to pinpoint the cause of the issue, and apparently it only occurs when a certain section of the following code is included (I have marked it with comments):
HBITMAP hBitMap = NULL;
BITMAP infoBitMap;
hBitMap = LoadBitmap(GetModuleHandle(NULL), IDB_BITMAP2);
if (hBitMap == NULL)
{
    MessageBoxA(NULL, "COULD NOT LOAD PET BITMAP", "ERROR", MB_OK);
}
HRGN BaseRgn = CreateRectRgn(0, 0, 0, 0);
HDC winDC = GetDC(hwnd);
HDC hMem = CreateCompatibleDC(winDC);
GetObject(hBitMap, sizeof(infoBitMap), &infoBitMap);
HGDIOBJ hMemOld = SelectObject(hMem, hBitMap);
COLORREF transparentCol = RGB(255, 255, 255);
for (int y = 0; y < infoBitMap.bmHeight; y++) //<<<< THIS SECTION ONWARDS
{
    int x, xLeft, xRight;
    x = 0;
    do {
        xLeft = xRight = 0;
        while (x < infoBitMap.bmWidth && (GetPixel(hMem, x, y) == transparentCol))
        {
            x++;
        }
        xLeft = x;
        while (x < infoBitMap.bmWidth && (GetPixel(hMem, x, y) != transparentCol))
        {
            x++;
        }
        xRight = x;
        HRGN TempRgn;
        TempRgn = CreateRectRgn(xLeft, y, xRight, y + 1);
        int ret = CombineRgn(BaseRgn, BaseRgn, TempRgn, RGN_OR);
        if (ret == ERROR)
        {
            MessageBoxA(NULL, "COMBINE REGION FAILED", "ERROR", MB_OK);
        }
        DeleteObject(TempRgn);
    } while (x < infoBitMap.bmWidth);
}
SetWindowRgn(hwnd, BaseRgn, TRUE); //<<<<---- UNTIL HERE
BitBlt(winDC, 0, 0, infoBitMap.bmWidth, infoBitMap.bmHeight, hMem, 0, 0, SRCCOPY);
SelectObject(hMem, hMemOld);
DeleteDC(hMem);
ReleaseDC(hwnd, winDC);
The marked section is the code I use to prevent the transparent parts of the bitmap from being displayed in the window client region. It runs every time the app changes bitmaps to display the animation.
The app works perfectly fine if I remove that code, so I suspect this is causing the issue. Does anyone know why this section of code causes lag, and ONLY with multiple instances open? Is there a way to deal with this lag?
You're iterating over each pixel on every update (correct me if I'm wrong), which is a fairly slow process, relatively speaking.
A better option would be to use something like this: https://stackoverflow.com/a/3970218/19192256 to create a mask color and simply use masking to remove the transparent pixels.
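For illustration, a minimal sketch of the classic two-blit masking technique (the names hdcMask, hdcImage, w, and h are hypothetical; the monochrome mask is built once per bitmap, not per frame):

// hdcMask holds a monochrome mask (white where transparent, black where opaque);
// hdcImage holds the sprite with its transparent areas pre-filled with black.
BitBlt(winDC, 0, 0, w, h, hdcMask, 0, 0, SRCAND);    // keep background, black out the sprite's silhouette
BitBlt(winDC, 0, 0, w, h, hdcImage, 0, 0, SRCPAINT); // OR the sprite into the blacked-out hole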
Creating multiple regions and concatenating them is a very slow and resource/CPU-intensive operation. Instead, use ExtCreateRegion() to create a single region from an array of rectangles.
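A minimal sketch of that approach (the helper name and the malloc-based packing are illustrative):

#include <windows.h>
#include <stdlib.h>
#include <string.h>

// Build one HRGN from an array of RECTs in a single call.
HRGN region_from_rects(const RECT *rects, DWORD count, RECT bound)
{
    DWORD size = sizeof(RGNDATAHEADER) + count * sizeof(RECT);
    RGNDATA *data = (RGNDATA *)malloc(size);
    data->rdh.dwSize   = sizeof(RGNDATAHEADER);
    data->rdh.iType    = RDH_RECTANGLES;
    data->rdh.nCount   = count;
    data->rdh.nRgnSize = count * sizeof(RECT);
    data->rdh.rcBound  = bound; // e.g. {0, 0, bmWidth, bmHeight}
    memcpy(data->Buffer, rects, count * sizeof(RECT));
    HRGN rgn = ExtCreateRegion(NULL, size, data);
    free(data);
    return rgn;
}

In the scanline loop above, you would append one RECT per opaque run to an array instead of calling CombineRgn(), then call region_from_rects() once at the end.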
Alternatively, forget using a region at all. Simply display your bitmap on the window normally and fill in the desired areas of the window with a unique color that you can make transparent using SetLayeredWindowAttributes(), as described in @Substitute's answer.
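That alternative is only a couple of calls (a minimal sketch; magenta as the key color is an arbitrary choice):

// Make the window layered once, e.g. right after CreateWindowEx.
SetWindowLong(hwnd, GWL_EXSTYLE,
              GetWindowLong(hwnd, GWL_EXSTYLE) | WS_EX_LAYERED);
// Every pixel drawn in the key color becomes fully transparent.
SetLayeredWindowAttributes(hwnd, RGB(255, 0, 255), 0, LWA_COLORKEY);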

Is it possible to create an XOR pen like DrawFocusRect()?

The Win32 GDI DrawFocusRect(HDC, const RECT*) function draws the dotted outline of a rectangle on the desired device context. The cool thing about this function is that it draws the dots using an XOR function, so when you call it a second time on the same device context and rectangle, it erases itself:
RECT rc = { 0, 0, 100, 100 };
DrawFocusRect(hdc, &rc); // draw rectangle
DrawFocusRect(hdc, &rc); // erase the rectangle we just drew
I want to achieve the same dotted line effect as DrawFocusRect(), but I just want a line, not a whole rectangle. I tried passing a RECT of height 1 to DrawFocusRect(), but this doesn't work because it XORs the "bottom line" of the rectangle on top of the top line, so nothing gets painted.
Can I create a plain HPEN that achieves the same effect as DrawFocusRect() so I can draw just a single line?
As @IInspectable commented, you want to use SetROP2(). The other half of the battle is creating the correct pen. Here is how the whole thing shakes out:
HPEN create_focus_pen()
{
    LONG width(1);
    SystemParametersInfo(SPI_GETFOCUSBORDERHEIGHT, 0, &width, 0);
    LOGBRUSH lb = { }; // initialize to zero
    lb.lbColor = 0xffffff; // white
    lb.lbStyle = BS_SOLID;
    return ExtCreatePen(PS_GEOMETRIC | PS_DOT, static_cast<DWORD>(width), &lb, 0, nullptr);
}

void draw_focus_line(HDC hdc, HPEN hpen, POINT from, POINT to)
{
    HPEN old_pen = static_cast<HPEN>(SelectObject(hdc, hpen));
    int old_rop = SetROP2(hdc, R2_XORPEN);
    MoveToEx(hdc, from.x, from.y, nullptr);
    LineTo(hdc, to.x, to.y);
    SelectObject(hdc, old_pen);
    SetROP2(hdc, old_rop);
}
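Drawing the same line twice then erases it, just like DrawFocusRect(); for example:

HPEN pen = create_focus_pen();
POINT a = { 10, 10 }, b = { 200, 10 };
draw_focus_line(hdc, pen, a, b); // draw the dotted line
draw_focus_line(hdc, pen, a, b); // XOR again to erase it
DeleteObject(pen);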

opengl failing to draw mesh

SOLVED: I'm not really sure how, though... thanks for all your help, guys.
I tried glDisable(GL_CULL_FACE); but the mesh is still not visible.
Basically I'm trying to draw a mesh (made from verts, normals, and texture coords) in OpenGL, using a display list. The mesh is on .obj format (exported from 3ds max 2013)
The problem is that the mesh is not visible.
To draw the display list I'm just using glCallLists (list, 1);
I have verified that I can draw things to the screen by drawing a point in the center of the screen and that works fine.
Could it be possible that the camera is positioned inside the mesh? If so is there an OpenGL state that I could enable to allow me to see the inside of a set of verts?
I know that the data I have is all valid, verified by printing each vert, normal, and texture coord to a file before adding it to the display list.
I have done no glTranslatef or anything like that; my projection matrix is set up like this:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective (45.0, (float)1024/(float)768, -9999, 9999);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
If you want to have a look at the .obj file, here it is: http://pastebin.com/PpG3vG5e
This is how I create the display list:
list = glGenLists (1);
glNewList (list, GL_COMPILE);
glBegin (GL_TRIANGLES);
for (i = 0; i < data.face_count; i++)
{
    // gather the normal, texture coordinate, and position for each of the
    // face's three vertices
    for (j = 0; j < 3; j++)
    {
        normal[j][0] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[j]]->e[0];
        normal[j][1] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[j]]->e[1];
        normal[j][2] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[j]]->e[2];
        tex[j][0] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[j]]->e[0];
        tex[j][1] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[j]]->e[1];
        tex[j][2] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[j]]->e[2];
        vert[j][0] = (float)data.vertex_list[data.face_list[i]->vertex_index[j]]->e[0];
        vert[j][1] = (float)data.vertex_list[data.face_list[i]->vertex_index[j]]->e[1];
        vert[j][2] = (float)data.vertex_list[data.face_list[i]->vertex_index[j]]->e[2];
    }
    // emit the triangle
    for (j = 0; j < 3; j++)
    {
        glNormal3f (normal[j][0], normal[j][1], normal[j][2]);
        glTexCoord3f (tex[j][0], tex[j][1], tex[j][2]);
        glVertex3f (vert[j][0], vert[j][1], vert[j][2]);
    }
}
glEnd ();
glEndList ();
EDIT:
I've tried things like:
glTranslatef (0, 0, 5);
glCallList (mesh);
glTranslatef (0, 0, 0);
but they don't work either :(
EDIT:
@datenwolf
Here is the code I use to draw it:
Draw_Begin ();
Mdl_Draw (list, 0.0f, 0.0f, 0.0f);
Draw_End ();
This
gluPerspective (45.0, (float)1024/(float)768, -9999, 9999);
is wrong. In a perspective projection both the near and the far plane distance must be of the same sign, i.e. both positive or both negative. Also the absolute value of the near plane must be smaller than the absolute value of the far plane. And the near plane distance must be nonzero. In mathematical notation:
sgn(near) = sgn(far) ∧ 0 < |near| < |far|
Usually both near and far are chosen positive. Also, as a rule of thumb, the near clipping plane should be chosen as far away as possible. The far plane can be placed at infinity (exploiting some of the properties of homogeneous matrices), but it is usually placed as close as possible to max out depth buffer resolution.
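Applied to the projection code above, a conventional setup would look like this (the 0.1 and 1000.0 plane distances are illustrative and should be tuned to your scene):

glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective (45.0, (float)1024/(float)768, 0.1, 1000.0); // 0 < near < far
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();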

WinAPI get mouse cursor icon

I want to get the cursor icon in Windows.
I think language I use isn't very important here, so I will just write pseudo code with WinAPI functions I'm trying to use:
c = CURSORINFO.new(20, 1, 1, POINT.new(1,1));
GetCursorInfo(c); #provides correctly filled structure with hCursor
DrawIcon(GetWindowDC(GetForegroundWindow()), 1, 1, c.hCursor);
So this part works fine, it draws current cursor on active window.
But that's not what I want. I want to get an array of pixels, so I should draw it in memory.
I'm trying to do it like this:
hdc = CreateCompatibleDC(GetDC(0)); #returns non-zero int
canvas = CreateCompatibleBitmap(hdc, 256, 256); #returns non-zero int too
c = CURSORINFO.new(20, 1, 1, POINT.new(1,1));
GetCursorInfo(c);
DrawIcon(hdc, 1, 1, c.hCursor); #returns 1
GetPixel(hdc, 1, 1); #returns -1
Why doesn't GetPixel() return a valid COLORREF? What am I missing?
I'm not very experienced with WinAPI, so I'm probably making some stupid mistake.
You have to select the bitmap you create into the device context. If you don't, the GetPixel function will return CLR_INVALID (0xFFFFFFFF):
A bitmap must be selected within the device context, otherwise, CLR_INVALID is returned on all pixels.
Also, the pseudo-code you've shown is leaking objects badly. Whenever you call GetDC, you must call ReleaseDC when you're finished using it. And whenever you create a GDI object, you must destroy it when you're finished using it.
Finally, you appear to be assuming that the coordinates for the point of origin—that is, the upper left point—are (1, 1). They are actually (0, 0).
Here's the code I would write (error checking omitted for brevity):
// Get your device contexts.
HDC hdcScreen = GetDC(NULL);
HDC hdcMem = CreateCompatibleDC(hdcScreen);
// Create the bitmap to use as a canvas.
HBITMAP hbmCanvas = CreateCompatibleBitmap(hdcScreen, 256, 256);
// Select the bitmap into the device context.
HGDIOBJ hbmOld = SelectObject(hdcMem, hbmCanvas);
// Get information about the global cursor.
CURSORINFO ci;
ci.cbSize = sizeof(ci);
GetCursorInfo(&ci);
// Draw the cursor into the canvas.
DrawIcon(hdcMem, 0, 0, ci.hCursor);
// Get the color of the pixel you're interested in.
COLORREF clr = GetPixel(hdcMem, 0, 0);
// Clean up after yourself.
SelectObject(hdcMem, hbmOld);
DeleteObject(hbmCanvas);
DeleteDC(hdcMem);
ReleaseDC(NULL, hdcScreen);
But one final caveat—the DrawIcon function will probably not work as you expect. It is limited to drawing an icon or cursor at the default size. On most systems, that will be 32x32. From the documentation:
DrawIcon draws the icon or cursor using the width and height specified by the system metric values for icons; for more information, see GetSystemMetrics.
Instead, you probably want to use the DrawIconEx function. The following code will draw the cursor at the actual size of the resource:
DrawIconEx(hdcMem, 0, 0, ci.hCursor, 0, 0, 0, NULL, DI_NORMAL);
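And since the original goal was an array of pixels rather than a single GetPixel() call, here is a minimal sketch of pulling the whole canvas into a 32-bpp array with GetDIBits(). It reuses the names from the code above and would slot in just before the cleanup section (note that the bitmap must be deselected from the memory DC before GetDIBits is called on it):

BITMAPINFO bmi = { 0 };
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = 256;
bmi.bmiHeader.biHeight = -256; // negative height = top-down row order
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

DWORD* pixels = (DWORD*)malloc(256 * 256 * sizeof(DWORD));
SelectObject(hdcMem, hbmOld); // deselect hbmCanvas first
GetDIBits(hdcMem, hbmCanvas, 0, 256, pixels, &bmi, DIB_RGB_COLORS);
// pixels[y * 256 + x] now holds the 0x00RRGGBB value of each canvas pixel.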

How would I map a camera image to create a live funhouse mirror using opencv?

Using OpenCV and Linux, I would like to create a fun-house mirror effect (short and squat, tall and thin) using a live web camera. My daughter loves those things and I would like to create one using a camera. I am not quite sure about the transforms necessary for these effects. Any help would be appreciated. I have much of the framework running, live video playing and such, just not the transforms.
Thanks
I think that you need to use a 'radial' transform and 'pincushion', which is the inverse radial.
To break the symmetry of the transforms, you can stretch the image before and after:
1. Suppose your image is 300x300 pixels.
2. Stretch it to 300x600 or 600x300 using cvResize().
3. Apply the transform: radial, pincushion, or sinusoidal.
4. Stretch back to 300x300.
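A minimal sketch of that pipeline in the old C API, assuming src is the current 300x300 camera frame (an IplImage*); the distortion step in the middle is a placeholder:

IplImage* tall = cvCreateImage(cvSize(300, 600), src->depth, src->nChannels);
IplImage* out  = cvCreateImage(cvSize(300, 300), src->depth, src->nChannels);
cvResize(src, tall, CV_INTER_LINEAR);  // stretch 300x300 -> 300x600
// ... apply the radial/pincushion distortion to 'tall' here ...
cvResize(tall, out, CV_INTER_LINEAR);  // squash back to 300x300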
I have never used radial or sinusoidal transforms in OpenCV, so I don't have a piece of code to attach, but you can use cvUndistort2() and see if it is OK.
Create a window with trackbars with range 0..100. Each trackbar controls a parameter of the distortion:
static IplImage* srcImage;
static IplImage* dstImage;
static double _camera[9];
static double _dist4Coeff[4]; // Distortion coefficients (radial and tangential)
static int _r = 50;  // Radial coefficient, 50 in range 0..100
static int _tX = 50; // Tangential coefficient in the X direction
static int _tY = 50; // Tangential coefficient in the Y direction
static int allRange = 50;

// Open the window
cvNamedWindow(winName, 1);
// Add track bars.
cvShowImage(winName, srcImage);
cvCreateTrackbar("Radial", winName, &_r , 2*allRange, callBackFun);
cvCreateTrackbar("Tang X", winName, &_tX, 2*allRange, callBackFun);
cvCreateTrackbar("Tang Y", winName, &_tY, 2*allRange, callBackFun);
callBackFun(0);

// The distortion callback
void callBackFun(int arg){
    CvMat intrCamParamsMat = cvMat( 3, 3, CV_64F, _camera );
    CvMat dist4Coeff = cvMat( 1, 4, CV_64F, _dist4Coeff );

    // Build the distortion coefficients matrix.
    dist4Coeff.data.db[0] = (_r-allRange*1.0)/allRange*1.0;
    dist4Coeff.data.db[1] = (_r-allRange*1.0)/allRange*1.0;
    dist4Coeff.data.db[2] = (_tY-allRange*1.0)/allRange*1.0;
    dist4Coeff.data.db[3] = (_tX-allRange*1.0)/allRange*1.0;

    // Build the intrinsic camera parameters matrix.
    intrCamParamsMat.data.db[0] = 587.1769751432448200/2.0;
    intrCamParamsMat.data.db[1] = 0.;
    intrCamParamsMat.data.db[2] = 319.5000000000000000/2.0;
    intrCamParamsMat.data.db[3] = 0.;
    intrCamParamsMat.data.db[4] = 591.3189722549362800/2.0;
    intrCamParamsMat.data.db[5] = 239.5000000000000000/2.0;
    intrCamParamsMat.data.db[6] = 0.;
    intrCamParamsMat.data.db[7] = 0.;
    intrCamParamsMat.data.db[8] = 1.;

    // Apply the transformation
    cvUndistort2( srcImage, dstImage, &intrCamParamsMat, &dist4Coeff );
    cvShowImage( winName, dstImage );
}
