WinAPI get mouse cursor icon - C

I want to get the cursor icon in Windows.
I think the language I use isn't very important here, so I will just write pseudocode with the WinAPI functions I'm trying to use:
c = CURSORINFO.new(20, 1, 1, POINT.new(1,1));
GetCursorInfo(c); #provides correctly filled structure with hCursor
DrawIcon(GetWindowDC(GetForegroundWindow()), 1, 1, c.hCursor);
So this part works fine, it draws current cursor on active window.
But that's not what I want. I want to get an array of pixels, so I should draw it in memory.
I'm trying to do it like this:
hdc = CreateCompatibleDC(GetDC(0)); #returns non-zero int
canvas = CreateCompatibleBitmap(hdc, 256, 256); #returns non-zero int too
c = CURSORINFO.new(20, 1, 1, POINT.new(1,1));
GetCursorInfo(c);
DrawIcon(hdc, 1, 1, c.hCursor); #returns 1
GetPixel(hdc, 1, 1); #returns -1
Why doesn't GetPixel() return COLORREF? What am I missing?
I'm not very experienced with WinAPI, so I'm probably making some stupid mistake.

You have to select the bitmap you create into the device context. If not, the GetPixel function will return CLR_INVALID (0xFFFFFFFF):
A bitmap must be selected within the device context, otherwise, CLR_INVALID is returned on all pixels.
Also, the pseudo-code you've shown is leaking objects badly. Whenever you call GetDC, you must call ReleaseDC when you're finished using it. And whenever you create a GDI object, you must destroy it when you're finished using it.
Finally, you appear to be assuming that the coordinates for the point of origin—that is, the upper left point—are (1, 1). They are actually (0, 0).
Here's the code I would write (error checking omitted for brevity):
// Get your device contexts.
HDC hdcScreen = GetDC(NULL);
HDC hdcMem = CreateCompatibleDC(hdcScreen);
// Create the bitmap to use as a canvas.
HBITMAP hbmCanvas = CreateCompatibleBitmap(hdcScreen, 256, 256);
// Select the bitmap into the device context.
HGDIOBJ hbmOld = SelectObject(hdcMem, hbmCanvas);
// Get information about the global cursor.
CURSORINFO ci;
ci.cbSize = sizeof(ci);
GetCursorInfo(&ci);
// Draw the cursor into the canvas.
DrawIcon(hdcMem, 0, 0, ci.hCursor);
// Get the color of the pixel you're interested in.
COLORREF clr = GetPixel(hdcMem, 0, 0);
// Clean up after yourself.
SelectObject(hdcMem, hbmOld);
DeleteObject(hbmCanvas);
DeleteDC(hdcMem);
ReleaseDC(NULL, hdcScreen);
But one final caveat—the DrawIcon function will probably not work as you expect. It is limited to drawing an icon or cursor at the default size. On most systems, that will be 32x32. From the documentation:
DrawIcon draws the icon or cursor using the width and height specified by the system metric values for icons; for more information, see GetSystemMetrics.
Instead, you probably want to use the DrawIconEx function. The following code will draw the cursor at the actual size of the resource:
DrawIconEx(hdcMem, 0, 0, ci.hCursor, 0, 0, 0, NULL, DI_NORMAL);
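And since the original goal was an array of pixels rather than a single GetPixel call, one way to finish the job is to read the canvas back with GetDIBits. This is only a sketch under the assumptions of the code above (the 256x256 canvas, 32 bits per pixel); note that GetDIBits requires the bitmap to not be selected into a device context, so call it after the SelectObject(hdcMem, hbmOld) cleanup line but before DeleteObject and ReleaseDC:
// Sketch: read hbmCanvas back as a 32-bpp, top-down pixel array.
BITMAPINFO bmi;
ZeroMemory(&bmi, sizeof(bmi));
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = 256;
bmi.bmiHeader.biHeight = -256;          // negative height = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

DWORD *pixels = (DWORD *)malloc(256 * 256 * sizeof(DWORD));
GetDIBits(hdcScreen, hbmCanvas, 0, 256, pixels, &bmi, DIB_RGB_COLORS);
// pixels[y * 256 + x] is one pixel; the bytes are in BGRA order.
free(pixels);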

Related

Why does my window lag when I run multiple instances of it?

I created a Win32 window app that moves around the screen occasionally, sort of like a pet. As it moves, it switches between 2 bitmaps to show 'animation' of it moving. The implementation involves multiple WM_TIMER messages: one timer moves the window, another changes the bitmap and the window region (to only display the bitmap without the transparent parts) as it is moving, and another changes the direction the window moves.
The window runs perfectly smoothly by itself, but when I open multiple instances, the animations and movements start to lag - it is not so noticeable at 2 windows, but 3 instances and above cause every single window to start lagging very noticeably. The movement and animations are choppy and even freeze occasionally.
I have tried removing portions of the code to pinpoint the cause of the issue, and apparently this only occurs when a section of the following code is put in (I have marked it out with comments):
HBITMAP hBitMap = NULL;
BITMAP infoBitMap;
hBitMap = LoadBitmap(GetModuleHandle(NULL), IDB_BITMAP2);
if (hBitMap == NULL)
{
MessageBoxA(NULL, "COULD NOT LOAD PET BITMAP", "ERROR", MB_OK);
}
HRGN BaseRgn = CreateRectRgn(0, 0, 0, 0);
HDC winDC = GetDC(hwnd);
HDC hMem = CreateCompatibleDC(winDC);
GetObject(hBitMap, sizeof(infoBitMap), &infoBitMap);
HGDIOBJ hMemOld = SelectObject(hMem, hBitMap);
COLORREF transparentCol = RGB(255, 255, 255);
for (int y = 0; y < infoBitMap.bmHeight; y++) //<<<< THIS SECTION ONWARDS
{
int x, xLeft, xRight;
x = 0;
do {
xLeft = xRight = 0;
while (x < infoBitMap.bmWidth && (GetPixel(hMem, x, y) == transparentCol))
{
x++;
}
xLeft = x;
while (x < infoBitMap.bmWidth && (GetPixel(hMem, x, y) != transparentCol))
{
x++;
}
xRight = x;
HRGN TempRgn;
TempRgn = CreateRectRgn(xLeft, y, xRight, y + 1);
int ret = CombineRgn(BaseRgn, BaseRgn, TempRgn, RGN_OR);
if (ret == ERROR)
{
MessageBoxA(NULL, "COMBINE REGION FAILED", "ERROR", MB_OK);
}
DeleteObject(TempRgn);
}
while (x < infoBitMap.bmWidth);
}
SetWindowRgn(hwnd, BaseRgn, TRUE); //<<<<---- UNTIL HERE
BitBlt(winDC, 0, 0, infoBitMap.bmWidth, infoBitMap.bmHeight, hMem, 0, 0, SRCCOPY);
SelectObject(hMem, hMemOld);
DeleteDC(hMem);
ReleaseDC(hwnd, winDC);
The commented section is the code I use to eliminate the transparent parts of the bitmap from being displayed in the window client region. It is run every time the app changes bitmap to display animation.
The app works perfectly fine if I remove that code, so I suspect this is causing the issue. Does someone know why this section of code causes lag, and ONLY with multiple instances open? Is there a way to deal with this lag?
You're iterating over each pixel on every update (correct me if I'm wrong), and calling GetPixel once per pixel is a fairly slow process (relatively speaking).
A better option would be to use something like this: https://stackoverflow.com/a/3970218/19192256 to create a mask color and simply use masking to remove the transparent pixels.
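For reference, a color-keyed variant of that idea, sketched with the variable names from the question (winDC, hMem, infoBitMap) and assuming white is the transparent color, could be a single TransparentBlt call instead of the per-pixel scan. This only affects the drawing, not the window shape, and TransparentBlt needs msimg32.lib:
// Sketch: color-keyed drawing; every white source pixel is skipped.
TransparentBlt(winDC, 0, 0, infoBitMap.bmWidth, infoBitMap.bmHeight,
               hMem, 0, 0, infoBitMap.bmWidth, infoBitMap.bmHeight,
               RGB(255, 255, 255));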
Creating multiple regions and combining them one by one is a very slow and resource/CPU-intensive operation. Instead, use ExtCreateRegion() to create a single region from an array of rectangles, as sketched below.
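ExtCreateRegion() takes an RGNDATA block whose buffer is simply an array of RECTs. A minimal sketch of such a helper (the function name and parameters here are made up for illustration, not the poster's code): the scan loop would append one RECT per opaque run to an array and then make a single call.
#include <windows.h>
#include <stdlib.h>
#include <string.h>

// Sketch: build one HRGN from an array of RECTs in a single call.
HRGN region_from_rects(const RECT *rects, DWORD count, RECT bounds)
{
    DWORD bytes = sizeof(RGNDATAHEADER) + count * sizeof(RECT);
    RGNDATA *data = (RGNDATA *)malloc(bytes);
    if (data == NULL) return NULL;

    data->rdh.dwSize   = sizeof(RGNDATAHEADER);
    data->rdh.iType    = RDH_RECTANGLES;
    data->rdh.nCount   = count;
    data->rdh.nRgnSize = count * sizeof(RECT);
    data->rdh.rcBound  = bounds;            // e.g. {0, 0, bmWidth, bmHeight}
    memcpy(data->Buffer, rects, count * sizeof(RECT));

    HRGN rgn = ExtCreateRegion(NULL, bytes, data);
    free(data);
    return rgn;
}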
Alternatively, forget using a region at all. Simply display your bitmap on the window normally and fill in the desired areas of the window with a unique color that you can make transparent using SetLayeredWindowAttributes(), as described in #Substitute's answer.
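A minimal sketch of that color-key approach (magenta as the key color is an arbitrary choice; use any color the bitmaps never contain):
// Make the window layered, then tell GDI to treat magenta as transparent.
SetWindowLong(hwnd, GWL_EXSTYLE,
              GetWindowLong(hwnd, GWL_EXSTYLE) | WS_EX_LAYERED);
SetLayeredWindowAttributes(hwnd, RGB(255, 0, 255), 0, LWA_COLORKEY);
// When painting, fill the areas that should be see-through with
// RGB(255, 0, 255) instead of computing a window region.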

Scaled Layers in GDI

Original question
Basically, I have two bitmaps, and I want to put one behind the other, scaled down to half its size.
Both are centered, and are of the same resolution.
The catch is that I want to put more than one bitmap on this back layer eventually, and want the scaling to apply to the whole layer and not just the individual bitmap.
My thought is that I would use a memory DC for the back layer, capture its contents into a bitmap of its own, and use StretchBlt to place it in my main DC.
The code I have right now doesn't work, and I can't make sense of it, let alone find anyone who had done this before for direction.
My variables at the moment are as follows
hBitmap - back bitmap
hFiller - front bitmap
hdc - main DC
ldc - back DC (made with CreateCompatibleDC(hdc))
resh - width of hdc
resv - height of hdc
note that my viewport origin is set to the center
--this part above is solved, with the one major issue being that it does not keep the back layers...
Revised Question
Here's my code. Everything works as intended except for the fact that the layers do not properly stack. They seem to erase what is underneath or fill it with black.
For the record this is a direct copy of my code. I explain sections of it but there is nothing missing between the code blocks.
case WM_TIMER:
{
switch(wParam)
{
case FRAME:
If any position or rotation values have changed, the following section of code clears the screen and prepares it to be rewritten
if(reload == TRUE){
tdc = CreateCompatibleDC(hdc);
oldFiller = SelectObject(tdc,hFiller);
GetObject(hFiller, sizeof(filler), &filler);
StretchBlt(hdc, 0-(resh/2), 0-(resv/2), resh, resv, tdc, 0, 0, 1, 1, SRCCOPY);
SelectObject(tdc,oldFiller);
DeleteDC(tdc);
if(turn == TRUE){
xForm.eM11 = (FLOAT) cos(r/angleratio);
xForm.eM12 = (FLOAT) sin(r/angleratio);
xForm.eM21 = (FLOAT) -sin(r/angleratio);
xForm.eM22 = (FLOAT) cos(r/angleratio);
xForm.eDx = (FLOAT) 0.0;
xForm.eDy = (FLOAT) 0.0;
SetWorldTransform(hdc, &xForm);
}
This is the part that only partially works. At a distance of 80 my scale value will make my bitmap 1 pixel by 1 pixel, so I consider this my "draw distance"
It scales properly, but the layers do not stack, as I mentioned above
for(int i=80;i>1;i--){
tdc = CreateCompatibleDC(hdc);
tbm = CreateCompatibleBitmap(hdc, resh, resv);
SelectObject(tdc, tbm);
BitBlt(tdc, 0-(resh/2), 0-(resv/2), resh, resv,hdc,0,0,SRCCOPY);
//drawing code goes in here
ldc = CreateCompatibleDC(hdc);
oldBitmap = SelectObject(ldc,hBitmap);
StretchBlt(tdc,(int)(angleratio*atan((double)128/(double)i)),0,(int)(angleratio*atan((double)128/(double)i)),(int)(angleratio*atan((double)128/(double)i)),ldc,0,0,128,128,SRCCOPY);
SelectObject(ldc,oldBitmap);
DeleteDC(ldc);
BitBlt(hdc, 0, 0, resh, resv, tdc, 0, 0, SRCCOPY);
DeleteObject(tbm);
DeleteDC(tdc);
}
reload = FALSE;
}
This section below just checks for keyboard input which changes the position or rotation of the "camera"
This part works fine and can be ignored
if(GetKeyboardState(NULL)==TRUE){
reload = TRUE;
if(GetKeyState(VK_UP)<0){
fb--;
}
if(GetKeyState(VK_DOWN)<0){
fb++;
}
if(GetKeyState(VK_RIGHT)<0){
lr--;
}
if(GetKeyState(VK_LEFT)<0){
lr++;
}
if(GetKeyState(0x57)<0){
p++;
}
if(GetKeyState(0x53)<0){
p--;
}
}
break;
}
}
break;

How do I get the actual position of vertices in OpenGL ES 2.0

After applying a rotation or a translation matrix to the vertex array, the vertex buffer is not updated.
So how can I get the position of the vertices after applying the matrix?
Here's the onDrawFrame() function:
public void onDrawFrame(GL10 gl) {
PositionHandle = GLES20.glGetAttribLocation(Program,"vPosition");
MatrixHandle = GLES20.glGetUniformLocation(Program,"uMVPMatrix");
ColorHandle = GLES20.glGetUniformLocation(Program,"vColor");
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT|GLES20.GL_DEPTH_BUFFER_BIT );
Matrix.rotateM(RotationMatrix,0,-90f,1,0,0);
Matrix.multiplyMM(vPMatrix,0,projectionMatrix,0,viewMatrix,0);
Matrix.multiplyMM(vPMatrix,0,vPMatrix,0,RotationMatrix,0);
GLES20.glUniformMatrix4fv(MatrixHandle, 1, false, vPMatrix, 0);
GLES20.glUseProgram(Program);
GLES20.glEnableVertexAttribArray(PositionHandle);
GLES20.glVertexAttribPointer(PositionHandle,3,GLES20.GL_FLOAT,false,0,vertexbuffer);
GLES20.glUniform4fv(ColorHandle,1,color,1);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES,0,6);
GLES20.glDisableVertexAttribArray(PositionHandle);
}
The GPU doesn't normally write back transformed results anywhere the application can use them. It's possible in ES 3.0 with transform feedback, BUT it's very expensive.
For touch event "hit" testing, you generally don't want to use the raw geometry. Instead, use some simple proxy geometry, which can be transformed in software on the CPU.
Maybe you should try this:
private float[] modelViewMatrix = new float[16];
...
Matrix.rotateM(RotationMatrix, 0, -90f, 1, 0, 0);
Matrix.multiplyMM(modelViewMatrix, 0, viewMatrix, 0, RotationMatrix, 0);
Matrix.multiplyMM(vpMatrix, 0, projectionMatrix, 0, modelViewMatrix, 0);
You can do the vertex movement calculations on the CPU, and then use the GLU.gluProject() function to convert the object's vertex coordinates into screen pixels. This data can be used when working with touch events.
private var view: IntArray = intArrayOf(0, 0, widthScreen, heightScreen)
...
GLU.gluProject(modelX, modelY, modelZ, mvMatrix, 0,
projectionMatrix, 0, view, 0,
coordinatesWindow, 0)
...
// coordinates in pixels of the screen
val x = coordinatesWindow[0]
val y = coordinatesWindow[1]

Using GDI+ in C - GdiplusStartup function returning 2

I am attempting to use GDI+ in my C application to take a screenshot and save it as a JPEG. I am using GDI+ to convert the BMP to JPEG, but apparently when calling the GdiplusStartup function, the return code is 2 (invalid parameter) instead of 0:
int main()
{
GdiplusStartupInput gdiplusStartupInput;
ULONG_PTR gdiplusToken;
//if(GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL) != 0)
// printf("GDI NOT WORKING\n");
printf("%d",GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL));
HDC hdc = GetDC(NULL); // get the desktop device context
HDC hDest = CreateCompatibleDC(hdc); // create a device context to use yourself
// get the height and width of the screen
int height = GetSystemMetrics(SM_CYVIRTUALSCREEN);
int width = GetSystemMetrics(SM_CXVIRTUALSCREEN);
// create a bitmap
HBITMAP hbDesktop = CreateCompatibleBitmap( hdc, width, height);
// use the previously created device context with the bitmap
SelectObject(hDest, hbDesktop);
// copy from the desktop device context to the bitmap device context
// call this once per 'frame'
BitBlt(hDest, 0,0, width, height, hdc, 0, 0, SRCCOPY);
// after the recording is done, release the desktop context you got..
ReleaseDC(NULL, hdc);
// ..and delete the context you created
DeleteDC(hDest);
SaveJpeg(hbDesktop,"a.jpeg",100);
GdiplusShutdown(gdiplusToken);
return 0;
}
I am trying to figure out why the GdiplusStartup function is not working.
Any thoughts?
Initialize the gdiplusStartupInput variable with the following values: GdiplusVersion = 1, DebugEventCallback = NULL, SuppressBackgroundThread = FALSE, SuppressExternalCodecs = FALSE.
According to the MSDN article on the GdiplusStartup function (http://msdn.microsoft.com/en-us/library/windows/desktop/ms534077%28v=vs.85%29.aspx), the GdiplusStartupInput structure has a default constructor which initializes the structure with these values. Since you call the function from C, the constructor is never run and the structure remains uninitialized. Provide your own initialization code to solve the problem.
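In C, a minimal sketch of that initialization (using exactly the field values listed above) would be:
// Fill every field by hand, since no C++ constructor runs in C.
GdiplusStartupInput gdiplusStartupInput;
gdiplusStartupInput.GdiplusVersion = 1;
gdiplusStartupInput.DebugEventCallback = NULL;
gdiplusStartupInput.SuppressBackgroundThread = FALSE;
gdiplusStartupInput.SuppressExternalCodecs = FALSE;

ULONG_PTR gdiplusToken;
if (GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL) != 0)
    printf("GdiplusStartup failed\n");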
// As Global
ULONG_PTR gdiplusToken;
// In top of main
GdiplusStartupInput gdiplusStartupInput;
GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
works for me.

Is it possible to create an XOR pen like DrawFocusRect()?

The Win32 GDI DrawFocusRect(HDC, const RECT*) function draws the dotted outline of a rectangle on the desired device context. The cool thing about this function is that it draws the dots using an XOR operation, so when you call it a second time on the same device context and rectangle, it erases itself:
RECT rc = { 0, 0, 100, 100 };
DrawFocusRect(hdc, &rc); // draw rectangle
DrawFocusRect(hdc, &rc); // erase the rectangle we just drew
I want to achieve the same dotted line effect as DrawFocusRect(), but I just want a line, not a whole rectangle. I tried passing a RECT of height 1 to DrawFocusRect(), but this doesn't work because it XORs the "bottom line" of the rectangle on top of the top line, so nothing gets painted.
Can I create a plain HPEN that achieves the same effect as DrawFocusRect() so I can draw just a single line?
As #IInspectable commented, you want to use SetROP2(). The other half of the battle is creating the correct pen. Here is how the whole thing shakes out:
HPEN create_focus_pen()
{
LONG width(1);
SystemParametersInfo(SPI_GETFOCUSBORDERHEIGHT, 0, &width, 0);
LOGBRUSH lb = { }; // initialize to zero
lb.lbColor = 0xffffff; // white
lb.lbStyle = BS_SOLID;
return ExtCreatePen(PS_GEOMETRIC | PS_DOT, width, &lb, 0, 0);
}
void draw_focus_line(HDC hdc, HPEN hpen, POINT from, POINT to)
{
HPEN old_pen = (HPEN)SelectObject(hdc, hpen);
int old_rop = SetROP2(hdc, R2_XORPEN);
MoveToEx(hdc, from.x, from.y, nullptr);
LineTo(hdc, to.x, to.y);
SelectObject(hdc, old_pen);
SetROP2(hdc, old_rop);
}
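A short usage sketch (hdc and the endpoints are placeholders); because of the XOR raster op, drawing the same line twice removes it again, just like DrawFocusRect:
HPEN pen = create_focus_pen();
POINT a = { 10, 10 }, b = { 200, 10 };
draw_focus_line(hdc, pen, a, b);   // draws the dotted line
draw_focus_line(hdc, pen, a, b);   // XORs it again, erasing it
DeleteObject(pen);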
