I'm wondering what values to change in a PIXELFORMATDESCRIPTOR object to change the color depth.
According to the OpenGL wiki, this is how you'd create a PIXELFORMATDESCRIPTOR object for an OpenGL context:
PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, // Flags
    PFD_TYPE_RGBA,   // The kind of framebuffer. RGBA or palette.
    32,              // Colordepth of the framebuffer.
    0, 0, 0, 0, 0, 0,
    0,
    0,
    0,
    0, 0, 0, 0,
    24,              // Number of bits for the depthbuffer
    8,               // Number of bits for the stencilbuffer
    0,               // Number of Aux buffers in the framebuffer.
    PFD_MAIN_PLANE,
    0,
    0, 0, 0
};
But it has several different fields affecting the color depth.
Which ones do I need to change to adjust the color depth accordingly?
The first number, 32 in your particular example, specifies the number of color bitplanes available to the framebuffer (the cColorBits field). The other numbers define the number of bitplanes to use for each individual component. It's perfectly possible to fit a 5-6-5 pixel format into a framebuffer with 32 bitplanes, which is a valid choice.
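For example, a sketch of requesting a 16-bit 5-6-5 format by filling the relevant fields by name (the remaining fields are simply left zeroed here):

PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize        = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion     = 1;
pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType   = PFD_TYPE_RGBA;
pfd.cColorBits   = 16;  // total color bitplanes requested
pfd.cRedBits     = 5;   // per-component bitplanes: 5-6-5
pfd.cGreenBits   = 6;
pfd.cBlueBits    = 5;
pfd.cDepthBits   = 24;
pfd.cStencilBits = 8;
pfd.iLayerType   = PFD_MAIN_PLANE;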
When you pass a PIXELFORMATDESCRIPTOR to ChoosePixelFormat, the values are taken as minimum values. However, the algorithm used by ChoosePixelFormat may not deliver an optimal result for your application. It can then be better to enumerate all available pixel formats and choose from them using a custom set of rules.
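A sketch of that enumeration approach using DescribePixelFormat; the acceptance rules below are only placeholders for whatever criteria your application actually needs:

// Enumerate all pixel formats of a DC and pick one by custom rules.
int ChoosePixelFormatManually(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    // DescribePixelFormat returns the highest available format index.
    int count = DescribePixelFormat(hdc, 1, sizeof(pfd), NULL);

    for (int i = 1; i <= count; ++i)
    {
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

        if (!(pfd.dwFlags & PFD_DRAW_TO_WINDOW)) continue;
        if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL)) continue;
        if (!(pfd.dwFlags & PFD_DOUBLEBUFFER))   continue;
        if (pfd.iPixelType != PFD_TYPE_RGBA)     continue;
        if (pfd.cColorBits < 24)                 continue;  // example rule
        if (pfd.cDepthBits < 24)                 continue;  // example rule

        return i;  // first acceptable format (illustrative policy only)
    }
    return 0;  // nothing matched
}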
I am trying to create a dummy context so I can render with cairo-gl and then copy that surface to an image surface buffer.
The code below is just a test so I can evaluate whether the OpenGL backend works.
Here is what I am doing, but I always get an error when creating the cairo_device. The error is thrown in _cairo_gl_dispatch_init_buffers.
EDIT: Actually, cairo fails in _cairo_gl_get_version and _cairo_gl_get_flavor.
HDC hdc = GetDC((HWND)pGraphics->GetWindow());
PIXELFORMATDESCRIPTOR pfd = {
sizeof(PIXELFORMATDESCRIPTOR), // size of this pfd
1, // version number
PFD_DRAW_TO_WINDOW | // support window
PFD_SUPPORT_OPENGL | // support OpenGL
PFD_DOUBLEBUFFER, // double buffered
PFD_TYPE_RGBA, // RGBA type
24, // 24-bit color depth
0, 0, 0, 0, 0, 0, // color bits ignored
0, // no alpha buffer
0, // shift bit ignored
0, // no accumulation buffer
0, 0, 0, 0, // accum bits ignored
32, // 32-bit z-buffer
0, // no stencil buffer
0, // no auxiliary buffer
PFD_MAIN_PLANE, // main layer
0, // reserved
0, 0, 0 // layer masks ignored
};
//HDC hdc;
int iPixelFormat;
// get the best available match of pixel format for the device context
iPixelFormat = ChoosePixelFormat(hdc, &pfd);
// make that the pixel format of the device context
SetPixelFormat(hdc, iPixelFormat, &pfd);
// create a rendering context
HGLRC hglrc = wglCreateContext(hdc);
// Test openGL
cairo_surface_t *surface_gl;
cairo_t *cr_gl;
cairo_device_t *cairo_device = cairo_wgl_device_create(hglrc);
surface_gl = cairo_gl_surface_create_for_dc(cairo_device, hdc, 500, 500);
cr_gl = cairo_create(surface_gl);
cairo_set_source_rgb(cr_gl, 1, 0, 0);
cairo_paint(cr_gl);
cairo_set_source_rgb(cr_gl, 0, 0, 0);
cairo_select_font_face(cr_gl, "Sans", CAIRO_FONT_SLANT_NORMAL,
CAIRO_FONT_WEIGHT_NORMAL);
cairo_set_font_size(cr_gl, 40.0);
cairo_move_to(cr_gl, 10.0, 50.0);
cairo_show_text(cr_gl, "openGL test");
cairo_surface_write_to_png(surface_gl, "C:/Users/Youlean/Desktop/imageGL.png");
cairo_destroy(cr_gl);
cairo_surface_destroy(surface_gl);
I have a three-dimensional array of binary numbers, which I use as a dictionary and push out to an LED array. The dictionary covers 27 letters, and each letter covers 30x30 pixels (where each pixel is a 0 or a 1).
I was using the Intel Edison - and the code worked well - but I ditched the Edison after having trouble connecting it to my PC (despite replacing it once). I switched to the Arduino Uno, but am now receiving an error that the array is too large.
Right now I have the array declared as boolean. Is there any way to reduce the memory demands of the array by storing it as bits instead? The array consists of just zeros and ones.
Here's a snip of the code:
boolean PHDict[27][30][30] = {
/* A */ {{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, /* this is one column of thirty that shows the letter "A" */
You could write it as
#include <stdint.h>
//...
uint32_t PHdict[27][30] = {
{ 0x00004000, ... },
....
};
...where each entry contains 30 bits packed into a 32-bit number.
The size is under 4 KB (27 * 30 * 4 = 3240 bytes).
You would need a bit of code to unpack the bits when reading the array, and a way to generate the packed values (i.e., a program which runs on your "host" computer and generates the initialized array for the source code).
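A sketch of the unpacking side, assuming the packer stores column c of each row in bit c of the 32-bit word (the helper name is made up for illustration):

#include <stdint.h>

// Return the pixel (0 or 1) at (row, col) of the given letter.
// Assumes bit c of each packed word corresponds to column c.
static inline uint8_t ph_get_pixel(const uint32_t dict[27][30],
                                   int letter, int row, int col)
{
    return (uint8_t)((dict[letter][row] >> col) & 1u);
}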
For the AVR processor, there's also a way to tell the compiler you want the array stored in program memory (flash) instead of data memory (SRAM). I think if you have it in data memory, the compiler will need to put the initialization data in flash anyway and copy it over before the program starts, so it's a good idea to explicitly store it in program memory. See https://gcc.gnu.org/onlinedocs/gcc/AVR-Variable-Attributes.html#AVR-Variable-Attributes
In fact, depending on the amount of flash memory in the processor, moving the array to program memory may be sufficient to solve the problem on its own, without needing to pack the bits.
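A sketch of the flash-storage variant, under the same packing assumption as above (PROGMEM and pgm_read_dword come from avr/pgmspace.h):

#include <stdint.h>
#include <avr/pgmspace.h>

// Packed dictionary kept in program memory (flash) instead of SRAM.
const uint32_t PHdict[27][30] PROGMEM = {
    { 0x00004000 /*, ... */ },
    /* ... remaining letters ... */
};

uint8_t ph_get_pixel_pm(int letter, int row, int col)
{
    // Reads must go through pgm_read_* because the data is not in SRAM.
    uint32_t packed = pgm_read_dword(&PHdict[letter][row]);
    return (uint8_t)((packed >> col) & 1u);
}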
I'm trying to upload a texture containing unsigned shorts and use it in a shader, but it's not working.
I have tried the following:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, vbt[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 640, 480, 0, GL_RED, GL_UNSIGNED_SHORT, kinect_depth);
glUniform1i(ptexture1, 1);
GLenum ErrorCheckValue = glGetError();
I know I'm binding the texture correctly because I get some results by using
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, vbt[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 640, 480, 0,
GL_RG, GL_UNSIGNED_BYTE, kinect_depth);
glUniform1i(ptexture1, 1);
GLenum ErrorCheckValue = glGetError();
In particular, I get part of my values in the red channel. I would like to upload the texture as unsigned bytes or as floats. However, I can't manage to get the glTexImage2D call right. Also, is it possible to do something similar using a depth texture? I would like to do some operations on the depth information I get from a Kinect and display it.
Your arguments to glTexImage2D are inconsistent. The 3rd argument (GL_RGB) suggests that you want a 3 component texture, the 7th (GL_RED) suggests a one-component texture. Then your other attempt uses GL_RG, which suggests 2 components.
You need to use an internal texture format that stores unsigned shorts, like GL_RGB16UI.
If you want one component, your call would look like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, 640, 480, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);
If you want three components:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16UI, 640, 480, 0, GL_RGB_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);
You also need to make sure that the types used in your shader for sampling the texture match the type of the data stored in the texture. In this example, since you use a 2D texture containing unsigned integer values, your sampler type should be usampler2D, and you want to store the result of the sampling operation (result of texture() call in the shader) in a variable of type uvec4. (paragraph added based on suggestion by Andon)
Some more background on the format/type arguments of glTexImage2D, since this is a source of fairly frequent misunderstandings:
The 3rd argument (internalFormat) is the format of the data that your OpenGL implementation will store in the texture (or at least the closest possible if the hardware does not support the exact format), and that will be used when you sample from the texture.
The last 3 arguments (format, type, data) belong together. format and type describe what is in data, i.e. they describe the data you pass into the glTexImage2D call.
It is generally a good idea to keep the two formats matched. In this case, the data you pass in is GL_UNSIGNED_SHORT, and the internal format GL_R16UI contains unsigned short values. In OpenGL ES, the internal format is required to match format/type. Full OpenGL does the conversion if necessary, which is undesirable for performance reasons, and also frequently not what you want because the precision of the data in the texture won't be the same as the precision of your original data.
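Putting this together for the one-component case, a sketch that reuses the question's variables (vbt, kinect_depth, ptexture1) and adds the texture parameters an integer texture needs; note that unnormalized integer textures should not use linear filtering, and the shader must sample them through a usampler2D as described above:

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, vbt[1]);

// Integer textures must be sampled without linear filtering.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// internalFormat GL_R16UI matches the GL_RED_INTEGER / GL_UNSIGNED_SHORT data.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, 640, 480, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);

glUniform1i(ptexture1, 1);  // the usampler2D uniform reads from texture unit 1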
I think I've thoroughly searched the forums, unless I left out certain keywords in my search string, so forgive me if I've missed a post. I am currently using OpenCV 2.4.0 and I have what I think is just a simple problem:
I am trying to take an unsigned character array (8-bit, 3-channel) that I get from another API and put it into an OpenCV matrix so I can view it. However, all that displays is an image of the correct size but a completely uniform gray. This is the same color you see when you specify an incorrect Mat name to be displayed.
Have consulted:
Convert a string of bytes to cv::mat (uses a string inside of array) and
opencv create mat from camera data (what I thought was a BINGO!, but can't seem to get to display the image properly).
I took a step back and just tried making a sample array (to eliminate the other part that supplies this array):
#include <opencv2/opencv.hpp>  // OpenCV 2.4.x umbrella header
#include <cstdio>              // getchar()
#include <iostream>

using namespace cv;
using namespace std;

int main() {
    bool isCamera = true;
    unsigned char image_data[] = {255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255};
    cv::Mat image_as_mat(Size(6,3),CV_8UC3,image_data);
    namedWindow("DisplayVector2",CV_WINDOW_AUTOSIZE);
    imshow("DisplayVector2",image_as_mat);
    cout << image_as_mat << endl;
    getchar();
    return 0;
}
So I am just creating a 6x3 matrix, with the first row being red pixels, the second row green pixels, and the third row blue. However, this still results in the same blank gray image, just of the correct size.
The output of the matrix is (note the semicolons, i.e., it formatted correctly):
[255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0; 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0; 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255]
I might be crazy or missing something obvious here. Do I need to initialize something in the Mat to allow it to display properly? Much appreciated as always for all your help everyone!
All the voodoo here boils down to calling getchar() instead of (the required) waitKey().
Let me explain: waitKey might be a misnomer here, but you actually need it, as the code inside it wraps the window's message loop, which triggers the actual blitting (besides waiting for keypresses).
If you don't call it, your window will never get updated and will just stay grey (that's what you observe here).
Indeed, you should have trusted the result from cout: your Mat got properly constructed, it just did not show up in the namedWindow.
(By the way, getchar() waits for a keypress in the console window, not your image window.)
Hope it helps, happy hacking further on ;)
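A minimal sketch of the fix against the question's code (only the last lines change):

imshow("DisplayVector2", image_as_mat);
cout << image_as_mat << endl;
waitKey(0);  // was getchar(); waitKey runs HighGUI's message loop so the window repaints;
             // 0 = block until a key is pressed in the image window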
I have two PNG images, the first with width 2247 and height 190, and the second with width 155 and height 36. I want the second image (src) to be placed in the center of the first image (dest). I created pixbufs for both and used gdk_pixbuf_composite as follows.
gdk_pixbuf_composite( srcpixbuf, dstpixbuf, 1000, 100, width2, height2, 0, 0, 1, 1, GDK_INTERP_BILINEAR, 255);
I get a hazy window of width2 by height2 on the first image.
If I replace width2 and height2 with 1.0, then I don't get the src image on the dst image at all. Where am I going wrong?
gdk_pixbuf_composite( srcpixbuf, dstpixbuf, 1000, 100, width2, height2, 1000, 100, 1, 1, GDK_INTERP_BILINEAR, 255);
This solved it. I had misunderstood the offset parameters. Basically, an intermediate scaled image is created, and only the part covered by the destination width and height is composited. So in my case the entire unscaled image needs to be moved to the destination offset, which is what the offset parameters do.
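For the original goal of actually centering the source image, a sketch that derives both the destination rectangle and the offsets from the two sizes (width1/height1/width2/height2 as in the question):

/* Center srcpixbuf (width2 x height2) on dstpixbuf (width1 x height1). */
int dest_x = (width1 - width2) / 2;    /* (2247 - 155) / 2 = 1046 */
int dest_y = (height1 - height2) / 2;  /* ( 190 -  36) / 2 =   77 */

/* Unscaled composite: the offsets move the source to the destination
 * rectangle, and only that width2 x height2 region is composited. */
gdk_pixbuf_composite(srcpixbuf, dstpixbuf,
                     dest_x, dest_y,    /* destination rectangle origin */
                     width2, height2,   /* destination rectangle size   */
                     dest_x, dest_y,    /* offset of the (unscaled) source */
                     1, 1,              /* no scaling */
                     GDK_INTERP_BILINEAR, 255);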