WriteableBitmap: Memory allocation mismatch - How to solve? - wpf

In my program I try to read the pixels of a color-palette image (a FormatConvertedBitmap with format Indexed4) using a WriteableBitmap.
The dimensions of the resulting image are 20 * 27 pixels.
The color palette has 16 colors, so each color takes a nibble and each byte carries two pixels.
However, in memory, WPF pads each pixel row with additional bytes up to a DWORD boundary.
I'm not able to address particular pixels because of this mismatch.
Which BitmapSource property do I need to examine to compute the correct stride of this image?

A WPF BitmapSource may use whatever stride it prefers, provided the stride can hold the values of all pixels in a row. In particular, it may round the stride up for optimized (e.g. DWORD-aligned) memory access. The property you are looking for is BackBufferStride.
The size of the underlying buffer is calculated by multiplying the stride by the number of rows:
WriteableBitmap bitmap = ...;
var bufferSize = bitmap.BackBufferStride * bitmap.PixelHeight;
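Once the stride is known, addressing an individual 4 bpp pixel is just nibble arithmetic. A minimal sketch in C of the index math (the buffer itself would come from CopyPixels or the BackBuffer; the function name is mine, not a WPF API):

```c
#include <stdint.h>

/* Return the 4-bit palette index of pixel (x, y) in a 4 bpp buffer
   whose rows are 'stride' bytes apart. Two pixels share each byte:
   the even-x pixel occupies the high nibble. */
static uint8_t get_index4(const uint8_t *buf, int stride, int x, int y)
{
    uint8_t b = buf[y * stride + x / 2];
    return (x % 2 == 0) ? (uint8_t)(b >> 4) : (uint8_t)(b & 0x0F);
}
```

For a 20-pixel-wide Indexed4 image, the raw row is 10 bytes but the stride would be 12; the formula above stays correct because it steps by stride, not by width.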

Related

Way to know YUV details (dimensions, formats and types)

I have an input.yuv image which I want to use in my code as an input.
But I want to know whether it is in 422, 420 or 444 format, whether it is planar or packed, and what its width, height and stride are.
When I view this image using the https://rawpixels.net/ tool, I can see a perfect grayscale image with dimensions 1152x512. But when I try yuv420p or other options, the color and luma components are not at the correct offsets, so it shows a mixture of color and grayscale images at different offsets (2 images on the same screen).
Is there any way to write C code to find the above mentioned YUV details (dimensions, formats and types)?
Not really. Files with a .yuv extension just contain raw pixel data, normally in a planar format.
That would typically be width * height luma pixels followed by Cb and Cr components of width/2 * height/2 each (420) or width/2 * height each (422).
They can be 8 or 10 bits per pixel, with 10-bit data usually stored in 2 bytes per sample. It's really just a case of trial and error to find out what it is.
Occasionally you find all sorts of arrangements of Y, Cb and Cr in files with a .yuv extension, but planar is the most common.
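One thing you can code for the trial-and-error: given a candidate width and height, each 8-bit planar scheme has a predictable frame size, so you can check which one matches the file size. A small sketch (the function name and the integer scheme encoding are my own):

```c
#include <stddef.h>

/* Expected byte count of one 8-bit planar frame: the luma plane
   plus two chroma planes whose dimensions depend on subsampling. */
static size_t frame_size(size_t w, size_t h, int scheme /* 420, 422 or 444 */)
{
    size_t luma = w * h;
    switch (scheme) {
    case 420: return luma + 2 * (w / 2) * (h / 2); /* = w*h*3/2 */
    case 422: return luma + 2 * (w / 2) * h;       /* = w*h*2   */
    case 444: return luma + 2 * luma;              /* = w*h*3   */
    default:  return 0;
    }
}
```

For 1152x512 this gives 884736, 1179648 and 1769472 bytes per frame respectively; stat the file, divide by each candidate and see which one divides evenly. It cannot distinguish layouts with the same size (e.g. planar vs. packed 444), so visual inspection is still needed.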

Finding max-min pixel luminance on screen/in texture without GLSL support

In my 2D map application, I have 16-bit heightmap textures containing altitudes in meters associated to a point on the map.
When I draw these textures on the screen, I would like to display an analysis such that the pixel referring to the highest altitude on the screen is white, the pixel referring to the lowest altitude in the screen is black and the values in-between are interpolated between those two.
I'm using an older OpenGL version and thus do not have access to modern pipeline features like GLSL or PBOs (which, as I've heard, can make transferring color-buffer contents to the CPU much more efficient than glReadPixels).
I do have access to the ATI_fragment_shader extension, which makes it possible to use a basic fragment shader to merge the R and G channels in these textures into a single float grayscale luminance value.
Then I would be able to re-color these pixels inside the shader (map them to the 0-1 range) based on the maximum and minimum pixel luminance values, but I don't know what those values are.
My question is: among the pixels currently on the screen, how do I find the ones with the maximum and minimum luminance values? Or, as an alternative, how do I find these values inside a texture? (I could make a glCopyTexImage2D call after drawing the texture with grayscale luminance values on the screen and retrieve the data as a texture.)
Stuff I've tried or read about so far:
-If I could somehow get the current pixel RGB values in the color buffer to the CPU side, I could find what I need manually and then use it. However, reading color-buffer contents with glReadPixels is unacceptably slow, even if I spread one read operation over multiple frames.
-Downsampling the texture to 1x1 until the last remaining pixel holds either the minimum or the maximum value, and then using this 1x1 texture inside the shader. I have no idea how to achieve this without GLSL and texel-fetch support, since I would have to look up the pixels to the right, above and above-right of the current one and find a min/max value among them.
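For comparison, once the data is on the CPU the reduction itself is cheap; the expensive part is the readback. A sketch of the scan-and-remap step over 16-bit luminance values (assuming the R/G merge has already produced them; names are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* One pass to find the extremes, one pass to remap to [0, 1]. */
static void normalize_heights(const uint16_t *in, float *out, size_t n)
{
    uint16_t lo = in[0], hi = in[0];
    for (size_t i = 1; i < n; i++) {
        if (in[i] < lo) lo = in[i];
        if (in[i] > hi) hi = in[i];
    }
    float range = (hi > lo) ? (float)(hi - lo) : 1.0f; /* avoid div by zero */
    for (size_t i = 0; i < n; i++)
        out[i] = (in[i] - lo) / range;
}
```

The same two-phase structure (global min/max, then per-pixel remap) is what a shader-based solution would also have to reproduce, e.g. via repeated 2x2 min/max downsampling passes into ever-smaller render targets.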

Sprite alignment in a Sprite Packing C application

I am creating the "perfect" sprite packer. This is a sprite packer that makes sure the output sprite is compatible with most if not all game engines and animation software. It is a program that merges images into a horizontal sprite sheet.
It converts (if needed) the source frames to BMP in memory.
It considers the color of the top-left pixel fully transparent for the entire image (this can be configured).
It parses each frame individually to find the real coordinates rect: where the actual frame starts and ends, and its width and height (images sometimes have a lot of extra transparent pixels).
It determines the frame box, which has the width and height of the largest frame, so that it is big enough to contain every frame. (For extra compatibility, every frame must have the same dimensions.)
It creates the output sprite with a width of nFrames * wFrameBox.
The problem is - anchor alignment. Currently, it tries to align each frame so that its center is on the frame box center.
if ((wBox / 2) > (frame->realCoordinates.w / 2))
{
    xpos = xBoxOffset + ((wBox / 2) - (frame->realCoordinates.w / 2));
}
else
{
    xpos = xBoxOffset + ((frame->realCoordinates.w / 2) - (wBox / 2));
}
When animated it looks better with this, but there is still an inconsistent horizontal frame position, so a walking animation looks like walking and shaking at the same time.
I also tried the following:
store the real x pixel position of the widest frame and use it as a reference point:
xpos = xBoxOffset + (frame->realCoordinates.x - xRef);
It also gives slightly better results, which shows that this is still not the correct algorithm.
Honestly, I don't know what I am doing.
What is the correct way to align sprite frames (to obtain the appropriate x position for drawing each frame), given that the output sprite sheet has a width of the number of frames multiplied by the width of the widest frame?
Your problem is that you first calculate the center and then calculate the size of the required bounding box. That is why your image 'shakes': in each image that center is different from the original center.
You should use the center of the original bounding box as your origin, then find the size of each sprite, keeping track of the leftmost, rightmost, topmost and bottommost non-transparent pixels. That gives you the bounding box you need to use to avoid the shaking.
The catch is that you will find most sprites are already made that way, so the original bounding box is actually defined as the minimum space needed to paint the whole sprite sequence, covering those non-transparent pixels.
The only way to remove unused sprite space is to store the first sprite complete and then the origin and dimensions of every other sprite, as is done in animated GIF and APNG (Animated PNG -> https://en.wikipedia.org/wiki/APNG).
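The answer's idea, sketched in C: compute one union bounding box over all trimmed frames, then place each frame at its original offset relative to that shared box, so the anchor never shifts between frames. (The Rect type and function names are illustrative, not the asker's actual code.)

```c
typedef struct { int x, y, w, h; } Rect; /* trimmed content rect in the source frame */

/* Union of all content rects: one box big enough for every frame,
   anchored at the same origin for all of them. */
static Rect union_box(const Rect *r, int n)
{
    Rect u = r[0];
    for (int i = 1; i < n; i++) {
        int right  = (r[i].x + r[i].w > u.x + u.w) ? r[i].x + r[i].w : u.x + u.w;
        int bottom = (r[i].y + r[i].h > u.y + u.h) ? r[i].y + r[i].h : u.y + u.h;
        if (r[i].x < u.x) u.x = r[i].x;
        if (r[i].y < u.y) u.y = r[i].y;
        u.w = right - u.x;
        u.h = bottom - u.y;
    }
    return u;
}

/* X position of a frame inside its sheet cell: keep the frame's
   original offset from the union box instead of re-centering it. */
static int frame_xpos(int cellOffset, Rect frame, Rect box)
{
    return cellOffset + (frame.x - box.x);
}
```

Because every frame is positioned by its own original offset from one shared box, relative motion inside the animation is preserved and the shaking disappears.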

How to superimpose a small bitmap onto a larger bitmap in C at a certain x, y position?

I am trying to write a program that detects pixel collision (overlapping of bits that have a value of 1) between two bitmap images. I know the position of the left, right, top and bottom side of each bitmap in x and y coordinates relative to the LCD screen. My thinking is that I could superimpose the first bitmap onto a large, blank (all zeros) bitmap the size of the screen, at its x and y position, then do the same for the second bitmap on its own canvas. After that I could do a bitwise AND of the two same-sized bitmaps; if the result is nonzero, I know some of the pixels have overlapped.
The problem is that I don't know how to superimpose two bitmaps. Does anyone have experience with this who could offer some advice?
EDIT: We are expected to use bitwise and bit-shifting operations to detect pixel level collision, with a maximum of 1 for loop.
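A minimal sketch of the OR-blit plus AND-test idea, assuming a 1 bpp layout where canvas bytes are MSB-first and each source row's srcW pixels sit in the low bits of one byte (all sizes and names here are illustrative, not from the asker's assignment):

```c
#include <stdint.h>

#define SCR_W 32                 /* screen width in pixels (multiple of 8) */
#define SCR_H 8
#define ROW_BYTES (SCR_W / 8)

/* OR a small 1-bpp bitmap into a screen-sized canvas at pixel (x, y).
   Each source row is one byte; its leftmost pixel is the highest of
   the low srcW bits. Shifting may straddle two destination bytes. */
static void blit_or(uint8_t canvas[SCR_H][ROW_BYTES],
                    const uint8_t *src, int srcW, int srcH, int x, int y)
{
    for (int r = 0; r < srcH; r++) {
        uint16_t row = (uint16_t)(src[r] << (16 - srcW)); /* left-align in 16 bits */
        row >>= (x % 8);                                  /* shift into position  */
        canvas[y + r][x / 8] |= (uint8_t)(row >> 8);
        if (x / 8 + 1 < ROW_BYTES)
            canvas[y + r][x / 8 + 1] |= (uint8_t)(row & 0xFF);
    }
}

/* Any overlapping 1-bits between the two canvases? */
static int collide(const uint8_t a[SCR_H][ROW_BYTES],
                   const uint8_t b[SCR_H][ROW_BYTES])
{
    for (int r = 0; r < SCR_H; r++)
        for (int c = 0; c < ROW_BYTES; c++)
            if (a[r][c] & b[r][c]) return 1;
    return 0;
}
```

The key trick is widening each source row into a 16-bit window before shifting, so a row whose destination crosses a byte boundary splits cleanly into two OR operations; collision detection is then a plain AND over the two canvases.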

I want to overlay a set of patches onto an image and control their color range

I have a set of patches I am overlaying onto an image. The patches below draw a grid of boxes over the image. This works when I don't try to restrict the colormap range, but when I try to set it with caxis, it does not allow me to use the array hpatch as a handle. How can I get this to work, or is there a better approach than what I am doing? Also, the image is grayscale, but I would like the patches to use the jet colormap. Is this possible?
hFig = figure;
hAx = axes('Parent',hFig);
for i = 1:256
    hpatch(i) = patch([x2(i+17) x2(i+18) x2(i+1) x2(i)], ...
                      [y2(i+17) y2(i+18) y2(i+1) y2(i)], ...
                      [0 0 0 0], 'Parent',hAx, 'FaceColor','flat', ...
                      'CData',cdata(i), 'CDataMapping','scaled', 'FaceAlpha',0.5);
end
caxis(hpatch,[0 25])
Here's one problem: the colormap is a figure-level property. When using the CAXIS function, you have to pass it an axes handle, not a patch handle, and specify a range that determines how the color data values in that set of axes are mapped to the colormap.
If your axes have an indexed-color image or other objects with their 'CDataMapping' property set to 'direct' or 'scaled', then it may get quite messy trying to make one colormap to accommodate them all. You would have to concatenate their colormaps into one to use for the figure, then adjust their color value indices accordingly. Changing the scaling for any one object would be more involved than just using CAXIS: you'd have to modify the corresponding section of the colormap for that object or modify its colormap indices stored in the 'CData' property.
However, you can simplify the problem by making sure only one object (or related set of objects) uses the colormap, setting all the other objects to use fixed RGB values for their 'CData'. Since you mention that your image is grayscale, it would be best to make it a Truecolor (i.e. RGB) image (if it isn't already) so that only your patches use the colormap. Here's how you can convert your image:
If the image you're plotting is an M-by-N-by-3 matrix, it's already an RGB image.
If the image is an M-by-N matrix and has an associated colormap, use the IND2RGB function to convert it to an RGB image.
If the image is an M-by-N matrix with no colormap, then IMSHOW will still display it using the figure colormap. To convert it to an RGB image, first apply whatever windowing you want to the 2-D image data, then replicate the data in the third dimension to make it an M-by-N-by-3 matrix. Here's an example (assuming img is your image data):
limits = [0.05 0.4]; %# The window you want to apply to the data
img = (img-limits(1))./diff(limits); %# Scale the data
img(img < 0) = 0; %# Clip the data outside 0 and 1 (even without these two
img(img > 1) = 1; %# steps, IMSHOW should display the data properly)
img = repmat(img,[1 1 3]); %# Replicate the image so it is 3-D
imshow(img); %# Display the image
Once you've converted and plotted the image and plotted your patches as above, you should be able to use a jet colormap for the patches like so:
colormap(jet);
caxis(hAx,[0 25]);
