BitBlt from a DirectX application - C

I have this code to get the pixel color at the current mouse position.
It works well, but the problem is that I can't get it to work with a D3D application...
I've tried a few times, but it only ever reads back black -
Red: 0
Green: 0
Blue: 0
Here's my code -
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <d3d9.h>

HWND hWindow;
HDC hScreen;
HDC hdcMem;
HBITMAP hBitmap;
HGDIOBJ hOld;
int sX, sY, x, y;
BYTE* sData = 0;
POINT cursorPos;

int main()
{
    int Red, Green, Blue;
    hScreen = GetDC(hWindow);
    sX = GetDeviceCaps(hScreen, HORZRES);
    sY = GetDeviceCaps(hScreen, VERTRES);
    hdcMem = CreateCompatibleDC(hScreen);
    hBitmap = CreateCompatibleBitmap(hScreen, sX, sY);

    BITMAPINFOHEADER bm = {0};
    bm.biSize = sizeof(BITMAPINFOHEADER);
    bm.biPlanes = 1;
    bm.biBitCount = 32;
    bm.biWidth = sX;
    bm.biHeight = -sY;
    bm.biCompression = BI_RGB;
    bm.biSizeImage = 0; // 3 * sX * sY;

    while (1) {
        hOld = SelectObject(hdcMem, hBitmap);
        BitBlt(hdcMem, 0, 0, sX, sY, hScreen, 0, 0, SRCCOPY);
        SelectObject(hdcMem, hOld);

        free(sData);
        sData = (BYTE*)malloc(4 * sX * sY);
        GetDIBits(hdcMem, hBitmap, 0, sY, sData, (BITMAPINFO*)&bm, DIB_RGB_COLORS);

        GetCursorPos(&cursorPos);
        x = cursorPos.x;
        y = cursorPos.y;
        Red   = sData[4 * ((y * sX) + x) + 2];
        Green = sData[4 * ((y * sX) + x) + 1];
        Blue  = sData[4 * ((y * sX) + x)];
        printf("\nRed: %d\nGreen: %d\nBlue: %d\n", Red, Green, Blue);
        Sleep(300);
    }
}
Thanks!

Which kind of D3D application are you using? If the application uses an overlay surface, you can't get anything with the code above. Overlay surfaces are widely used in video players and are handled completely differently from normal DirectX surfaces: ordinary screen-shot software can only capture the primary surface, and Microsoft doesn't provide any public interface for reading overlay surfaces. Some software can still do it, most commonly by hooking DirectX, but that's a different topic.
If your D3D application doesn't use an overlay surface, you can use DirectX itself to grab the screen data and then read the pixel you want from it:
Use CreateOffscreenPlainSurface to create an offscreen surface
Use GetFrontBufferData to copy the screen into it
Lock the surface and read the pixel to get the color
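A minimal sketch of those three steps (C++-style COM calls, error handling omitted), assuming pDevice is the application's existing IDirect3DDevice9 pointer:
IDirect3DSurface9* pSurface = NULL;
D3DDISPLAYMODE mode;
pDevice->GetDisplayMode(0, &mode);

// 1. Create an offscreen plain surface in system memory.
//    GetFrontBufferData always fills it in D3DFMT_A8R8G8B8.
pDevice->CreateOffscreenPlainSurface(mode.Width, mode.Height,
    D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pSurface, NULL);

// 2. Copy the front buffer (i.e. the screen) into the surface.
pDevice->GetFrontBufferData(0, pSurface);

// 3. Lock the surface and read the pixel under the cursor.
POINT pt;
GetCursorPos(&pt);
D3DLOCKED_RECT lr;
pSurface->LockRect(&lr, NULL, D3DLOCK_READONLY);
DWORD pixel = *(DWORD*)((BYTE*)lr.pBits + pt.y * lr.Pitch + pt.x * 4);
printf("Red: %lu Green: %lu Blue: %lu\n",
       (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF);
pSurface->UnlockRect();
pSurface->Release();
In windowed mode GetFrontBufferData returns the whole desktop, so the screen-space cursor coordinates can be used directly.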

Related

How to draw circle with sin and cos in C?

I have a problem making some circles orbit (a solar system simulation) in C.
I've actually been at it for about a day, but I can't figure it out.
First, how do I change the planets' movement speed?
Some friends told me I could use an "if" to adjust the speed, but I failed...
Second, positioning: I draw the circles with Ellipse, but I don't know how to make an orbit.
My earth's orbit goes wrong... here is the code I have so far.
#include <Windows.h>
#include <stdio.h>
#include <math.h>

#define solar_size 30
#define earth_size 16
#define PI 3.141592654
#define MOVE_SPEED 3
#define rad angle*180/PI

int angle;
double sun_x, sun_y, earth_x, earth_y;
double x, y;
int dx;
int dy;
int i;

int main(void) {
    HWND hwnd = GetForegroundWindow();
    HDC hdc = GetWindowDC(hwnd);
    SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0)));
    Rectangle(hdc, 0, 0, GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN));
    TextOut(hdc, 250, 450, L"solar system Simulation", 23);
    while (1) {
        sun_x = 250;
        sun_y = 250;
        earth_x = sun_x + 40;
        earth_y = sun_y + 40;
        SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(255, 0, 0)));
        SelectObject(hdc, CreateSolidBrush(RGB(255, 0, 0)));
        Ellipse(hdc, sun_x, sun_y, sun_x + solar_size, sun_y + solar_size);
        for (angle = 0; angle <= 360; angle++) {
            SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(0, 0, 220)));
            SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 220)));
            Ellipse(hdc, earth_x, earth_y, earth_x + earth_size, earth_y + earth_size);
            Sleep(50);
            SelectObject(hdc, CreatePen(PS_SOLID, 3, RGB(0, 0, 0)));
            SelectObject(hdc, CreateSolidBrush(RGB(0, 0, 0)));
            Ellipse(hdc, earth_x + 30, earth_y + 30, earth_x + earth_size, earth_y + earth_size);
            earth_x = 40 * cos(rad) + 40;
            earth_y = 40 * sin(rad) + 40;
        }
        continue;
    }
}
Your code suffers from being unorganized. For example, you freely mix variables like earth_size and hard-wired numbers like 16. That's a recipe for disaster. Please try to be more systematic.
It's hard not to make this answer a laundry list of errors.
The position (px, py) on a circle with centre (cx, cy) and radius r is:
px = cx + r * cos(angle)
py = cy + r * sin(angle)
Therefore, your initialization is wrong:
earth_x = sun_x + 40;
earth_y = sun_y + 40; // should be just sun_y
The way the Ellipse function in GDI works, your drawing command should look like:
Ellipse(hdc, earth_x - earth_size / 2, earth_y - earth_size / 2,
earth_x + earth_size / 2, earth_y + earth_size / 2);
Perhaps it is useful to collect the data of the celestial bodies (position, size, colour) in a struct and write a drawing function for it that just says DrawBody(hdc, earth), so that you don't have to repeat (i.e. copy and paste) the drawing code.
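A minimal sketch of that idea (the struct layout, the field names, and the pointer-based signature are just one possible choice):
typedef struct {
    double x, y;      /* centre of the body */
    int size;         /* diameter in pixels */
    COLORREF colour;
} Body;

void DrawBody(HDC hdc, const Body *b)
{
    HPEN pen = CreatePen(PS_SOLID, 3, b->colour);
    HBRUSH brush = CreateSolidBrush(b->colour);
    HGDIOBJ oldPen = SelectObject(hdc, pen);
    HGDIOBJ oldBrush = SelectObject(hdc, brush);

    Ellipse(hdc, (int)(b->x - b->size / 2), (int)(b->y - b->size / 2),
                 (int)(b->x + b->size / 2), (int)(b->y + b->size / 2));

    /* select the old objects back and delete the ones we created,
       so no GDI objects are leaked (see below) */
    SelectObject(hdc, oldPen);
    SelectObject(hdc, oldBrush);
    DeleteObject(pen);
    DeleteObject(brush);
}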
As for your speed: One possible source of error is here:
#define rad angle*180/PI
That's the wrong way round: to convert degrees to radians you multiply by PI and divide by 180, so it should be #define rad (angle * PI / 180.0).
Finally, learn how SelectObject works: You should save the return value in a variable and select it back to the DC after you are done. If you don't do that, you will leak GDI objects. You can see how many GDI objects your application uses in the Task Manager. If that number grows constantly, you are leaking objects. Eventually, your application will behave strangely.
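Putting the pieces together, one possible corrected orbit loop might look like this. This is only a sketch, using the Body/DrawBody helpers from above and the hdc from your original code; the erase step is kept as crude as in your version:
Body sun   = { 265, 265, solar_size, RGB(255, 0, 0) };  /* centre = 250 + radius 15 */
Body earth = { 0, 0, earth_size, RGB(0, 0, 220) };
double orbit_r = 80;
int deg;

for (deg = 0; ; deg = (deg + MOVE_SPEED) % 360) {
    double a = deg * PI / 180.0;              /* degrees -> radians */
    earth.x = sun.x + orbit_r * cos(a);
    earth.y = sun.y + orbit_r * sin(a);

    DrawBody(hdc, &sun);
    DrawBody(hdc, &earth);
    Sleep(50);

    /* erase the earth by drawing it again in the background colour */
    Body eraser = earth;
    eraser.colour = RGB(0, 0, 0);
    DrawBody(hdc, &eraser);
}
Increasing MOVE_SPEED makes the planet advance more degrees per frame, which is the natural place to control its speed.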

How to draw RGB bitmap to window using GDI?

I have an image in memory with the following byte layout
blue, green, red, alpha (32 bits per pixel)
The alpha is not used.
I want to draw it to a window using GDI. Later I may want to draw only a smaller part of it to the window. But the bitmap in memory is always fixed at a certain width & height.
How can this bitmap drawing operation be done?
SetDIBitsToDevice and/or StretchDIBits can be used to draw pixel data directly to an HDC if the pixel data is in a format that can be described by a BITMAPINFOHEADER. If your color values are not in the order GDI expects, you must set the compression to BI_BITFIELDS instead of BI_RGB and append three DWORD color masks after the BITMAPINFOHEADER in memory.
case WM_PAINT:
{
    RECT rc;
    GetClientRect(hWnd, &rc);
    PAINTSTRUCT ps;
    HDC hDC = wParam ? (HDC) wParam : BeginPaint(hWnd, &ps);
    // ARGB() is not a standard Win32 macro; it just packs A,R,G,B into a UINT32,
    // e.g. #define ARGB(a,r,g,b) ((UINT32)(((a)<<24)|((r)<<16)|((g)<<8)|(b)))
    static const UINT32 pixeldata[] = { ARGB(255,255,0,0), ARGB(255,255,0,255), ARGB(255,255,255,0), ARGB(255,0,0,0) };
    BYTE bitmapinfo[FIELD_OFFSET(BITMAPINFO, bmiColors) + (3 * sizeof(DWORD))];
    BITMAPINFOHEADER &bih = *(BITMAPINFOHEADER*) bitmapinfo;
    bih.biSize = sizeof(BITMAPINFOHEADER);
    bih.biWidth = 2, bih.biHeight = 2;
    bih.biPlanes = 1, bih.biBitCount = 32;
    bih.biCompression = BI_BITFIELDS, bih.biSizeImage = 0;
    bih.biClrUsed = bih.biClrImportant = 0;
    DWORD *pMasks = (DWORD*) (&bitmapinfo[bih.biSize]);
    pMasks[0] = 0xff0000; // Red
    pMasks[1] = 0x00ff00; // Green
    pMasks[2] = 0x0000ff; // Blue
    StretchDIBits(hDC, 0, 0, rc.right, rc.bottom, 0, 0, 2, 2, pixeldata, (BITMAPINFO*) &bih, DIB_RGB_COLORS, SRCCOPY);
    return !(wParam || EndPaint(hWnd, &ps));
}
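For the blue, green, red, alpha layout in the question, that byte order is exactly GDI's native 32-bit DIB order, so plain BI_RGB works and no colour masks are needed at all. A minimal sketch, where pPixels, imgWidth, imgHeight, and hdc are assumed to be your image buffer, its dimensions, and the target DC (hypothetical names):
BITMAPINFO bmi = {0};
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = imgWidth;
bmi.bmiHeader.biHeight = -imgHeight;    // negative height = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;   // BGRA is GDI's default order, no masks needed

// Draw the whole image 1:1 at (0, 0) of the target DC.
SetDIBitsToDevice(hdc, 0, 0, imgWidth, imgHeight,
                  0, 0, 0, imgHeight, pPixels, &bmi, DIB_RGB_COLORS);
Drawing only a smaller part of the image later can be done through the source-rectangle parameters of SetDIBitsToDevice or StretchDIBits; the bitmap in memory stays fixed either way.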

Improving screen capture performance

I am going to create some kind of "remote desktop" application that streams the content of the screen over a socket to a connected client.
In order to take a screenshot, I've come up with the following piece of code, which is a modified version of examples I've seen here and there.
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

int _tmain( int argc, _TCHAR * argv[] )
{
    int ScreenX = 0;
    int ScreenY = 0;
    BYTE* ScreenData = 0;

    HDC hScreen = GetDC(GetDesktopWindow());
    ScreenX = GetDeviceCaps(hScreen, HORZRES);
    ScreenY = GetDeviceCaps(hScreen, VERTRES);
    ScreenData = (BYTE*)calloc(4 * ScreenX * ScreenY, sizeof(BYTE));

    BITMAPINFOHEADER bmi = {0};
    bmi.biSize = sizeof(BITMAPINFOHEADER);
    bmi.biPlanes = 1;
    bmi.biBitCount = 32;
    bmi.biWidth = ScreenX;
    bmi.biHeight = -ScreenY;
    bmi.biCompression = BI_RGB;
    bmi.biSizeImage = 0; // 3 * ScreenX * ScreenY;

    int iBegTc = ::GetTickCount();

    // Take 100 screen captures for a more accurate measurement of the duration.
    for( int i = 0; i < 100; ++i )
    {
        HBITMAP hBitmap = CreateCompatibleBitmap(hScreen, ScreenX, ScreenY);
        HDC hdcMem = CreateCompatibleDC(hScreen);
        HGDIOBJ hOld = SelectObject(hdcMem, hBitmap);
        BitBlt(hdcMem, 0, 0, ScreenX, ScreenY, hScreen, 0, 0, SRCCOPY);
        SelectObject(hdcMem, hOld);
        GetDIBits(hdcMem, hBitmap, 0, ScreenY, ScreenData, (BITMAPINFO*)&bmi, DIB_RGB_COLORS);
        DeleteDC(hdcMem);
        DeleteObject(hBitmap);
    }

    int iEndTc = ::GetTickCount();
    printf( "%d ms", (iEndTc - iBegTc) / 100 );

    system("PAUSE");
    ReleaseDC(GetDesktopWindow(), hScreen);
    return 0;
}
My problem is that the code inside the loop takes too long to execute; in my case it's about 36 ms per iteration.
I am wondering which statements could be done just once and therefore moved outside the loop, like I did for the byte buffer. I don't know, however, which ones must be repeated for each new image and which ones only need to be done once.
Keep BitBlt and GetDIBits inside the loop, move the rest outside the loop as follows:
HBITMAP hBitmap = CreateCompatibleBitmap(hScreen, ScreenX, ScreenY);
HDC hdcMem = CreateCompatibleDC(hScreen);
HGDIOBJ hOld = SelectObject(hdcMem, hBitmap);
for( int i = 0; i < 100; ++i )
{
    BitBlt(hdcMem, 0, 0, ScreenX, ScreenY, hScreen, 0, 0, SRCCOPY);
    // hBitmap is updated now
    GetDIBits(hdcMem, hBitmap, 0, ScreenY, ScreenData, (BITMAPINFO*)&bmi, DIB_RGB_COLORS);
    // wait...
}
SelectObject(hdcMem, hOld);
DeleteDC(hdcMem);
DeleteObject(hBitmap);
In addition, bmi.biSizeImage should be set to the data size, in this case 4 * ScreenX * ScreenY.
This won't make the code noticeably faster; the bottleneck is BitBlt. It still runs at about 30 frames/sec, which should be okay unless there is a game or movie on the screen.
You might also try saving to a 24-bit bitmap. It won't make any difference in this code, but the data size would be smaller: ((width * bitcount + 31) / 32) * 4 * height.
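Spelled out as a small helper, that size calculation looks like this (illustration only):
static int DibRowStride(int width, int bitcount)
{
    // each DIB scanline is padded to a multiple of 4 bytes
    return ((width * bitcount + 31) / 32) * 4;
}

static int DibImageSize(int width, int height, int bitcount)
{
    return DibRowStride(width, bitcount) * height;
}

// e.g. for a 1920x1080 screen:
//   DibImageSize(1920, 1080, 32) == 8294400 bytes
//   DibImageSize(1920, 1080, 24) == 6220800 bytes (25% smaller)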
The Aero feature of Windows seems to affect BitBlt speed.
If you repeatedly BitBlt even a single pixel from the display, it will run at about 30 frames per second with near-idle CPU usage. But if you turn the Aero feature of Windows off, you'll get remarkably faster BitBlt speeds.
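On Windows Vista/7 composition can also be toggled programmatically through the DWM API; a hedged sketch (on Windows 8 and later this call is accepted but ignored, since composition is always on there):
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

// before the capture loop:
DwmEnableComposition(DWM_EC_DISABLECOMPOSITION);

// ... capture ...

// and restore it afterwards:
DwmEnableComposition(DWM_EC_ENABLECOMPOSITION);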

Is it possible to render antialiased text onto a transparent background with pure GDI?

I've been asking a lot of questions about text aliasing, line aliasing, and transparency lately because I wanted to write a platform-agnostic vector graphics system for Go; the Windows code is written in C. Premultiplication shenanigans have led me to change the focus to just rendering text (so I can access system fonts).
Right now I have something that draws text to an offscreen bitmap. This works, except for the antialiased bits: because I fill the memory buffer with 0xFF to flip the alpha byte (which GDI sets to 0x00 for every pixel it draws), the antialiasing blends towards white. Other people have seen it blend towards black. This happens with both ANTIALIASED_QUALITY and CLEARTYPE_QUALITY.
I am drawing with TextOut() into a DIB in this case. The DIB is backed by a copy of the screen DC (GetDC(NULL)).
Is there anything I can do to just get text transparent? Can I somehow detect the white pixels, unblend them, and convert that to an alpha? How would I do that for colors too similar to white?
I wrote some code to do this.
The AntialiasedText function draws anti-aliased text onto an off-screen bitmap. It calculates the transparency so that the text can be blended with any background using the AlphaBlend API function.
The function is followed by a WM_PAINT handler illustrating its use.
// Yeah, I'm lazy...
const int BitmapWidth = 500;
const int BitmapHeight = 128;

// Draw "text" using the specified font and colour and return an anti-aliased bitmap
HBITMAP AntialiasedText(LOGFONT* plf, COLORREF colour, LPCWSTR text)
{
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = BitmapWidth;
    bmi.bmiHeader.biHeight = BitmapHeight;
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    LPBYTE pBits;
    HBITMAP hDIB = CreateDIBSection(0, &bmi, DIB_RGB_COLORS, (LPVOID*)&pBits, 0, 0);

    // Don't want ClearType
    LOGFONT lf = *plf;
    lf.lfQuality = ANTIALIASED_QUALITY;
    HFONT hFont = CreateFontIndirect(&lf);

    HDC hScreenDC = GetDC(0);
    HDC hDC = CreateCompatibleDC(hScreenDC);
    ReleaseDC(0, hScreenDC);

    HBITMAP hOldBMP = (HBITMAP)SelectObject(hDC, hDIB);
    HFONT hOldFont = (HFONT)SelectObject(hDC, hFont);

    RECT rect = {0, 0, BitmapWidth, BitmapHeight};
    FillRect(hDC, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));
    TextOut(hDC, 2, 2, text, wcslen(text));

    // Flush drawing
    GdiFlush();

    // Calculate alpha
    LPBYTE pixel = pBits;
    int pixelCount = BitmapWidth * BitmapHeight;
    BYTE r = GetRValue(colour);
    BYTE g = GetGValue(colour);
    BYTE b = GetBValue(colour);
    for (int c = 0; c != pixelCount; ++c)
    {
        // Set alpha
        BYTE alpha = 255 - pixel[0];
        pixel[3] = alpha;
        // Set colour (premultiplied by alpha, as AlphaBlend expects)
        pixel[0] = b * alpha / 255;
        pixel[1] = g * alpha / 255;
        pixel[2] = r * alpha / 255;
        pixel += 4;
    }

    SelectObject(hDC, hOldFont);
    SelectObject(hDC, hOldBMP);
    DeleteDC(hDC);
    DeleteObject(hFont);
    return hDIB;
}
Here's a WM_PAINT handler to exercise the function. It draws the same text twice, first using TextOut and then using the anti-aliased bitmap. They look much the same, though not as good as ClearType.
case WM_PAINT:
{
    // hdc and ps are assumed to be declared in the enclosing WndProc
    // (HDC hdc; PAINTSTRUCT ps;), as in the standard Win32 template.
    LPCWSTR someText = L"Some text";
    hdc = BeginPaint(hWnd, &ps);

    LOGFONT font = {0};
    font.lfHeight = 40;
    font.lfWeight = FW_NORMAL;
    wcscpy_s(font.lfFaceName, L"Comic Sans MS");

    // Draw the text directly to compare to the bitmap
    font.lfQuality = ANTIALIASED_QUALITY;
    HFONT hFont = CreateFontIndirect(&font);
    font.lfQuality = 0;
    HFONT hOldFont = (HFONT)SelectObject(hdc, hFont);
    TextOut(hdc, 2, 10, someText, wcslen(someText));
    SelectObject(hdc, hOldFont);
    DeleteObject(hFont);

    // Get an antialiased bitmap and draw it to the screen
    HBITMAP hBmp = AntialiasedText(&font, RGB(0, 0, 0), someText);
    HDC hScreenDC = GetDC(0);
    HDC hBmpDC = CreateCompatibleDC(hScreenDC);
    ReleaseDC(0, hScreenDC);
    HBITMAP hOldBMP = (HBITMAP)SelectObject(hBmpDC, hBmp);

    BLENDFUNCTION bf;
    bf.BlendOp = AC_SRC_OVER;
    bf.BlendFlags = 0;
    bf.SourceConstantAlpha = 255;
    bf.AlphaFormat = AC_SRC_ALPHA;

    int x = 0;
    int y = 40;
    AlphaBlend(hdc, x, y, BitmapWidth, BitmapHeight, hBmpDC, 0, 0, BitmapWidth, BitmapHeight, bf);

    SelectObject(hBmpDC, hOldBMP);
    DeleteDC(hBmpDC);
    DeleteObject(hBmp);

    EndPaint(hWnd, &ps);
}
break;

Changes made to image surface aren't reflected when painting

I have a small code snippet which loads an image from a PNG file, then modifies the image data in memory by making a specific color transparent (setting alpha to 0 for that color). Here's the code itself:
static gboolean expose (GtkWidget *widget, GdkEventExpose *event, gpointer userdata)
{
    int width, height, stride, x, y;
    cairo_t *cr = gdk_cairo_create(widget->window);
    cairo_surface_t* image;
    char* ptr;

    if (supports_alpha)
        cairo_set_source_rgba (cr, 1.0, 1.0, 1.0, 0.0); /* transparent */
    else
        cairo_set_source_rgb (cr, 1.0, 1.0, 1.0); /* opaque white */
    cairo_set_operator (cr, CAIRO_OPERATOR_SOURCE);
    cairo_paint (cr);

    image = cairo_image_surface_create_from_png ("bg.png");
    width = cairo_image_surface_get_width (image);
    height = cairo_image_surface_get_height (image);
    stride = cairo_image_surface_get_stride (image);
    cairo_surface_flush (image);

    ptr = (unsigned char*)malloc (stride * height);
    memcpy (ptr, cairo_image_surface_get_data (image), stride * height);
    cairo_surface_destroy (image);

    image = cairo_image_surface_create_for_data (ptr, CAIRO_FORMAT_ARGB32, width, height, stride);
    cairo_surface_flush (image);
    for (y = 0; y < height; y++) {
        for (x = 0; x < width; x++) {
            char alpha = 0;
            unsigned int z = *((unsigned int*)&ptr [y * stride + x * 4]);
            if ((z & 0xffffff) == 0xffffff) {
                z = (z & ~0xff000000) | (alpha & 0xff000000);
                *((unsigned int*) &ptr [y * stride + x * 4]) = z;
            }
        }
    }
    cairo_surface_mark_dirty (image);
    cairo_surface_write_to_png (image, "image.png");

    gtk_widget_set_size_request (GTK_OBJECT (window), width, height);
    gtk_window_set_resizable (GTK_OBJECT (window), FALSE);

    cairo_set_source_surface (cr, image, 0, 0);
    cairo_paint_with_alpha (cr, 0.9);

    cairo_destroy (cr);
    cairo_surface_destroy (image);
    free (ptr);
    return FALSE;
}
When I dump the modified data to PNG, transparency is actually there. But when the same data is used as a source surface for painting, there's no transparency. What might be wrong?
Attachments:
image.png - modified data dumped to file for debugging purposes,
demo.png - actual result
bg.png - the source image, omitted due to Stack Overflow restrictions; it's simply a black rounded rectangle on a white background. The expected result is a black translucent rectangle with completely transparent surroundings, not white ones like those in demo.png.
Setting alpha to 0 means that the color is completely transparent. Since cairo uses premultiplied alpha, you have to set the whole pixel to 0; otherwise the color components would have higher values than the alpha channel. I think cairo chokes on those super-luminescent pixels.
So instead of this code:
if ((z & 0xffffff) == 0xffffff) {
z = (z & ~0xff000000) | (alpha & 0xff000000);
*((unsigned int*) &ptr [y * stride + x * 4]) = z;
}
You should try the following:
if ((z & 0xffffff) == 0xffffff) {
*((unsigned int*) &ptr [y * stride + x * 4]) = 0;
}
And while we are at it:
Doesn't (z & 0xffffff) == 0xffffff check whether the red, green, and blue channels are all at 100% while ignoring the alpha channel? Are you sure that's really what you want? z == 0xffffffff would be opaque white.
Instead of using unsigned int, it would be better if you used uint32_t for accessing the pixel data. Portability!
Your code assumes that cairo_image_surface_create_from_png() always gives you an image surface with format ARGB32. I don't think that's necessarily always correct and e.g. RGB24 is possible as well.
I think I would do something like this:
for (y = 0; y < height; y++) {
    uint32_t *row = (uint32_t *) &ptr[y * stride];
    for (x = 0; x < width; x++) {
        uint32_t px = row[x];
        if (is_expected_color(px))
            row[x] = 0;
    }
}
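And for the format assumption, a small sketch of the check mentioned above, placed right after cairo_image_surface_create_from_png() in the expose handler:
cairo_format_t fmt = cairo_image_surface_get_format (image);
if (fmt != CAIRO_FORMAT_ARGB32) {
    /* either bail out, or convert by painting the PNG onto a freshly
       created ARGB32 surface before touching the raw bytes */
    cairo_surface_destroy (image);
    cairo_destroy (cr);
    return FALSE;
}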
