I am trying to create an HBITMAP from an array that contains the color values for the pixels. The problem is that when I create a 24-bpp bitmap, CreateDIBitmap uses BGR values instead of the RGB values I would like.
The code to create the Bitmap is as follows:
image_size = 600 * 600 * 3;
aimp_buffer = (char *)malloc(image_size * sizeof(char));
for (counter = 0; counter < image_size;)
{
aimp_buffer[counter++] = 255;
aimp_buffer[counter++] = 0;
aimp_buffer[counter++] = 0;
}
ads_scrbuf->avo_buffer = (void *)aimp_buffer;
ads_scrbuf->im_height = 600;
ads_scrbuf->im_width = 600;
ads_scrbuf->im_scanline = 600;
memset(&info, 0, sizeof(info));
memset(&info.bmiHeader, 0, sizeof(info.bmiHeader));
info.bmiHeader.biBitCount = 24;
info.bmiHeader.biHeight= -600;
info.bmiHeader.biWidth= 600;
info.bmiHeader.biSize = sizeof(info.bmiHeader);
info.bmiHeader.biPlanes = 1;
info.bmiHeader.biCompression = BI_RGB;
memset(&header, 0, sizeof(BITMAPV5HEADER));
header.bV5Width = 600;
header.bV5Height = 600;
header.bV5BitCount = 24;
header.bV5Size = sizeof(BITMAPV5HEADER);
header.bV5Planes = 1;
header.bV5Compression = BI_RGB;
*adsp_hBitmap = CreateDIBitmap(GetDC(ds_apiwindow), (BITMAPINFOHEADER *)&header,
    CBM_INIT, (void *)ads_scrbuf->avo_buffer, &info, DIB_RGB_COLORS);
This should fill the whole image with red, but instead it comes out blue.
The Windows convention for DIB bitmaps is BGR. You can't change that. You will simply have to adapt to it.
If you load, for instance, a *.bmp file into memory, or you declare a variable such as DWORD cRef = 0xFF0000 and fill a pixel buffer with it, then in the second case you will see red, so the byte order is BGR in both cases (the value reads as 0xRRGGBB in the source editor). But try calling e.g. SetTextColor(hDc, cRef): the very same value comes out blue, so it will be a hell of an adaptation, because the Windows convention for DIB pixel data is just the opposite of the Windows convention for e.g. HBRUSH objects and other COLORREF-based calls. I'd really wonder in which way this is useful...
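To make the point concrete, here is a minimal sketch (reusing aimp_buffer from the question, with hdc standing in for whatever device context you draw with) of filling the 24-bpp DIB buffer so that it really comes out red, next to the equivalent COLORREF for a GDI call:
// Sketch only: in DIB memory the per-pixel byte order is B, G, R.
unsigned char *p = (unsigned char *)aimp_buffer;
for (int i = 0; i < 600 * 600; ++i)
{
    *p++ = 0;    // blue
    *p++ = 0;    // green
    *p++ = 255;  // red
}
// A COLORREF, by contrast, is laid out as 0x00BBGGRR, so red for GDI calls is:
COLORREF red = RGB(255, 0, 0);   // == 0x000000FF
SetTextColor(hdc, red);
Note that each DIB scanline has to be padded to a multiple of 4 bytes; 600 * 3 = 1800 is already a multiple of 4, so no padding is needed in this particular case.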
I have the following code:
private static void AddElements(Canvas canvas)
{
double canvasHeight = canvas.Height;
double canvasWidth = canvas.Width;
double y0 = canvasHeight / 2;
double x0 = canvasWidth / 2;
// Move the coordinate origin (0,0) to the middle of the canvas
TranslateTransform tt = new TranslateTransform(x0, y0);
Line line1 = new Line();
line1.X1 = -350;
line1.Y1 = 0;
line1.X2 = 350;
line1.Y2 = 0;
line1.Stroke = Brushes.Black;
line1.StrokeThickness = 2.0;
line1.RenderTransform = tt;
canvas.Children.Add(line1);
Line line2 = new Line();
line2.X1 = 0;
line2.Y1 = -350;
line2.X2 = 0;
line2.Y2 = 350;
line2.Stroke = Brushes.Black;
line2.StrokeThickness = 2.0;
line2.RenderTransform = tt;
canvas.Children.Add(line2);
Label lblN = new Label();
lblN.Width = 50;
lblN.Background = Brushes.Red;
lblN.Margin = new System.Windows.Thickness(0, -350, 0, 0);
lblN.Content = $"N";
lblN.HorizontalContentAlignment = System.Windows.HorizontalAlignment.Center;
lblN.VerticalContentAlignment = System.Windows.VerticalAlignment.Center;
lblN.RenderTransform = tt;
lblN.Padding = new System.Windows.Thickness(0);
lblN.BorderBrush = Brushes.Black;
lblN.BorderThickness = new System.Windows.Thickness(2.0);
lblN.RenderTransform = tt;
canvas.Children.Add(lblN);
Label lblS = new Label();
lblS.Width = 50;
lblS.Background = Brushes.Red;
lblS.Margin = new System.Windows.Thickness(0, 350, 0, 0);
lblS.Content = $"S";
lblS.HorizontalContentAlignment = System.Windows.HorizontalAlignment.Center;
lblS.VerticalContentAlignment = System.Windows.VerticalAlignment.Center;
lblS.RenderTransform = tt;
lblS.Padding = new System.Windows.Thickness(0);
lblS.BorderBrush = Brushes.Black;
lblS.BorderThickness = new System.Windows.Thickness(2.0);
lblS.RenderTransform = tt;
canvas.Children.Add(lblS);
}
This method is called from a menu event handler and it shows a coordinate system with (0,0) in the middle of the canvas. It should show a label with "N" at the top and a label with "S" at the bottom.
But it shows the attached image instead.
Does anyone know why lblN looks different from lblS?
best regards
Volkhard
=============
If I set the height of both Label objects to 15
lblN.Height = 15;
:
lblS.Height = 15;
I get the following:
I expected lblN to be higher up on the y-axis.
What's causing it
Through a bit of testing, I can definitely say that it's the lblN.Margin = new System.Windows.Thickness(0, -350, 0, 0); that's causing the problem. Apparently, when you give a Label a negative margin like that, it will move upwards only as far as its Height, and then it will start expanding instead of just continuing to move. So you end up with a Label that's 350 tall. We could try to figure out why that is, but really, that would be missing the point.
Admittedly, I don't have any direct documentation to back up the following statement, but from years of experience with WPF I feel I can say:
Margin is intended to be used to give space between elements in a dynamic layout, not to give an element an absolute position.
This behavior of the Label seems to strengthen the idea that using Margin in this way was not something that was planned for by the designers.
What you should do instead
Canvas has tools for giving an element a set position, yet nowhere do you use Canvas's SetLeft, SetTop, SetRight, or SetBottom. Take a look at the example on MSDN. You shouldn't need to use a TranslateTransform or set Margin at all. Instead, you should calculate where you want the element to be and use one of the above four listed methods to assign that position.
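As a rough sketch of that approach (reusing the sizes from your code; the exact offsets are up to you), positioning lblN with Canvas.SetLeft/SetTop instead of a Margin and a TranslateTransform could look like this:
double y0 = canvas.ActualHeight / 2;
double x0 = canvas.ActualWidth / 2;
Label lblN = new Label
{
    Width = 50,
    Height = 15,
    Background = Brushes.Red,
    Content = "N",
    HorizontalContentAlignment = System.Windows.HorizontalAlignment.Center,
    VerticalContentAlignment = System.Windows.VerticalAlignment.Center,
    Padding = new System.Windows.Thickness(0),
    BorderBrush = Brushes.Black,
    BorderThickness = new System.Windows.Thickness(2.0)
};
// Place the label 350 units above the centre, centred horizontally.
Canvas.SetLeft(lblN, x0 - lblN.Width / 2);
Canvas.SetTop(lblN, y0 - 350);
canvas.Children.Add(lblN);
lblS would get the mirrored call, e.g. Canvas.SetTop(lblS, y0 + 350).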
Extra Tip
Don't use canvas.Height and canvas.Width; use canvas.ActualHeight and canvas.ActualWidth instead. The first pair only works if you are explicitly setting the size of the Canvas (which it seems you are). But in a scenario where the Canvas is dynamically sized, the first pair will be NaN. The second pair always returns the actual size of the Canvas.
This doesn't make a difference in your current use case, but it might later on. If you're doing calculations based on the actual size of an element (as opposed to the size you might want it to be), always use ActualHeight and ActualWidth.
Is it possible to modify the colors of an Image in WPF via code (or even using Templates)?
Suppose I have an image which I need to apply to a Tile - which will have a White Foreground color by default and a Transparent Background. Something like the following PNG (it is somewhere here!):
Instead of adding different images with different colors, I just want to manipulate the white and, say, change it to black.
If it can be done, can someone give me a few pointers on what I need to do or look into?
One way to do this would be to use the BitmapDecoder class to retrieve the raw pixel data. You can then modify the pixels, and build a new WriteableBitmap from that modified pixel data:
// Copy pixel colour values from existing image.
// (This loads them from an embedded resource. BitmapDecoder can work with any Stream, though.)
StreamResourceInfo x = Application.GetResourceStream(new Uri(BaseUriHelper.GetBaseUri(this), "Image.png"));
BitmapDecoder dec = BitmapDecoder.Create(x.Stream, BitmapCreateOptions.None, BitmapCacheOption.Default);
BitmapFrame image = dec.Frames[0];
byte[] pixels = new byte[image.PixelWidth * image.PixelHeight * 4];
image.CopyPixels(pixels, image.PixelWidth*4, 0);
// Modify the white pixels
for (int i = 0; i < pixels.Length/4; ++i)
{
byte b = pixels[i * 4];
byte g = pixels[i * 4 + 1];
byte r = pixels[i * 4 + 2];
byte a = pixels[i * 4 + 3];
if (r == 255 &&
g == 255 &&
b == 255 &&
a == 255)
{
// Change it to red.
g = 0;
b = 0;
pixels[i * 4 + 1] = g;
pixels[i * 4] = b;
}
}
// Write the modified pixels into a new bitmap and use that as the source of an Image
var bmp = new WriteableBitmap(image.PixelWidth, image.PixelHeight, image.DpiX, image.DpiY, PixelFormats.Pbgra32, null);
bmp.WritePixels(new Int32Rect(0, 0, image.PixelWidth, image.PixelHeight), pixels, image.PixelWidth*4, 0);
img.Source = bmp;
This works after a fashion, but there's a problem. Here's how the result looks if I show it on a dark background:
As you can see, it's got a sort of white border. What's happened here is that your white cross had anti-aliased edges, meaning that the pixels around the edges are actually a semi-transparent shade of grey.
We can deal with that using a slightly more sophisticated technique in the pixel modification loop:
if ((r == 255 &&
g == 255 &&
b == 255 &&
a == 255) ||
(a != 0 && a != 255 &&
r == g && g == b && r != 0))
{
// Change it to red.
g = 0;
b = 0;
pixels[i * 4 + 1] = g;
pixels[i * 4] = b;
}
Here's how that looks on a black background:
As you can see, that looks right. (OK, you wanted black not red, but the basic approach will be the same for any target colour.)
EDIT 2015/1/21: As ar_j pointed out in the comments, the Pbgra32 format requires premultiplied alpha. For the example I've given it is actually safe to ignore that, but if you were modifying colour channels in any way other than setting them to 0, you'd need to multiply each value by (a/255). E.g., as ar_j shows for the G channel: pixels[i * 4 + 1] = (byte)(g * a / 255);. Since g is zero in my code, this makes no difference, but for non-primary colours you would need to do it that way.
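For a non-primary target the recolouring loop would then look roughly like this (newR/newG/newB are a hypothetical target colour, not part of the original code; the white/grey test is the same as above):
byte newR = 0, newG = 0, newB = 0;   // e.g. black, as the question asked for
for (int i = 0; i < pixels.Length / 4; ++i)
{
    byte b = pixels[i * 4];
    byte g = pixels[i * 4 + 1];
    byte r = pixels[i * 4 + 2];
    byte a = pixels[i * 4 + 3];
    bool isWhite = r == 255 && g == 255 && b == 255 && a == 255;
    bool isGreyEdge = a != 0 && a != 255 && r == g && g == b && r != 0;
    if (isWhite || isGreyEdge)
    {
        // Pbgra32 stores premultiplied alpha, so scale each channel by a/255.
        pixels[i * 4 + 2] = (byte)(newR * a / 255);
        pixels[i * 4 + 1] = (byte)(newG * a / 255);
        pixels[i * 4]     = (byte)(newB * a / 255);
    }
}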
Here it is on a gradient fill background just to show that the transparency is working:
You could also write out the modified version:
var enc = new PngBitmapEncoder();
enc.Frames.Add(BitmapFrame.Create(bmp));
using (Stream pngStream = File.OpenWrite(@"c:\temp\modified.png"))
{
enc.Save(pngStream);
}
Here's the result:
You can see the red cross, and it'll be on top of whatever background colour StackOverflow is using. (White, as I write this, but maybe they'll redesign one day.)
Whether this will work for the images you want to use is harder to know for certain, because it depends on what your definition of 'white' is. Depending on how your images were produced, you may find things are ever so slightly off-white (particularly around the edges), and you may need further tweaking. But the basic approach should be OK.
I'm trying to make a simple image viewer. I basically load an image into a surface and then create a texture from it.
At the end, I do the usual SDL_RenderClear(), SDL_RenderCopy() and SDL_RenderPresent() as per the migration guide.
This works fine, except that if I call SDL_UpdateTexture() before the 3 render calls above, I get a messed up image:
I am calling SDL_UpdateTexture() like this:
SDL_UpdateTexture(texture, NULL, image->pixels, image->pitch)
Where image is the surface I loaded for the image and texture is the texture I created from that. Attempts to vary the pitch result in differently messed up images. I also tried using a rect for the second parameter, but results are the same if the rect has the same dimensions as the image. If the dimensions are larger (e.g. same as the window), the update doesn't happen, but there are no errors.
The full code is available.
I would like to manipulate pixels of the surface directly via image->pixels and then call SDL_UpdateTexture(), but just calling SDL_UpdateTexture() without any tampering is enough to mess things up.
I think there is something wrong with the pitch or the SDL_Rect parameters,
but there is another SDL function which might help:
SDL_Texture* SDL_CreateTextureFromSurface(SDL_Renderer* renderer,
SDL_Surface* surface)
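For example, a minimal sketch of that route (assuming image is the surface you loaded and renderer is your SDL_Renderer) could look like this:
// Create a texture directly from the loaded surface.
SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, image);
if (texture == NULL) {
    SDL_Log("SDL_CreateTextureFromSurface failed: %s", SDL_GetError());
}
// The surface is no longer needed once the texture exists.
SDL_FreeSurface(image);
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, texture, NULL, NULL);
SDL_RenderPresent(renderer);
Note that a texture created this way has static access, so if you want to lock it and edit pixels in place you would still need a streaming texture, as in the snippet below.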
Could you maybe try the following? It replaces any pink (r=255, g=0, b=255) pixels with transparent ones. You would simply change the pixels32 manipulation to suit your needs.
SDL_Surface* image = IMG_Load(filename);
SDL_Surface* imageFormatted = SDL_ConvertSurfaceFormat(image,
    SDL_PIXELFORMAT_RGBA8888,
    0);
texture = SDL_CreateTexture(renderer,
    SDL_PIXELFORMAT_RGBA8888,
    SDL_TEXTUREACCESS_STREAMING,
    imageFormatted->w, imageFormatted->h);
void* pixels = NULL;
int pitch = 0;
// Lock the texture, copy the surface's pixels into it, then edit them in place.
SDL_LockTexture(texture, &imageFormatted->clip_rect, &pixels, &pitch);
memcpy(pixels, imageFormatted->pixels, (imageFormatted->pitch * imageFormatted->h));
int width = imageFormatted->w;
int height = imageFormatted->h;
Uint32* pixels32 = (Uint32*)pixels;
int pixelCount = (pitch / 4) * height;
Uint32 colorKey = SDL_MapRGB(imageFormatted->format, 0xFF, 0x00, 0xFF);
Uint32 transparent = SDL_MapRGBA(imageFormatted->format, 0xFF, 0x00, 0xFF, 0x00);
for (int i = 0; i < pixelCount; i++) {
    if (pixels32[i] == colorKey) {
        pixels32[i] = transparent;
    }
}
SDL_UnlockTexture(texture);
SDL_FreeSurface(imageFormatted);
SDL_FreeSurface(image);
pixels = NULL;
pitch = 0;
width = 0;
height = 0;
I am trying to find out how to use FJCore to encode a WriteableBitmap to a jpeg. I understand that WriteableBitmap provides the raw pixels but I am not sure how to convert it to the format that FJCore expects for its JpegEncoder method. JpegEncoder has two overloads, one takes a FluxJpeg.Core.Image and the other takes in a DecodedJpeg.
I was trying to create a FluxJpeg.Core.Image, but it expects a byte[][,] for the image data: byte[n][x,y], where x is the width and y is the height, but I don't know what n should be.
I thought that n should be 4, since that would correspond to the ARGB info encoded in each pixel, but when I tried that, FJCore threw an argument-out-of-range exception. Here is what I tried; raster is my byte[4][x,y] array.
raster[0][x, y] = (byte)((pixel >> 24) & 0xFF);
raster[1][x, y] = (byte)((pixel >> 16) & 0xFF);
raster[2][x, y] = (byte)((pixel >> 8) & 0xFF);
raster[3][x, y] = (byte)(pixel & 0xFF);
Figured it out! I downloaded FJCore from code.google.com and went through the image class. It only expects the RGB bytes. Here is the function that I wrote. I need the base64 version of the image so that's what my function returns.
private static string GetBase64Jpg(WriteableBitmap bitmap)
{
int width = bitmap.PixelWidth;
int height = bitmap.PixelHeight;
int bands = 3;
byte[][,] raster = new byte[bands][,];
for (int i = 0; i < bands; i++)
{
raster[i] = new byte[width, height];
}
for (int row = 0; row < height; row++)
{
for (int column = 0; column < width; column++)
{
int pixel = bitmap.Pixels[width * row + column];
raster[0][column, row] = (byte)(pixel >> 16);
raster[1][column, row] = (byte)(pixel >> 8);
raster[2][column, row] = (byte)pixel;
}
}
ColorModel model = new ColorModel { colorspace = ColorSpace.RGB };
FluxJpeg.Core.Image img = new FluxJpeg.Core.Image(model, raster);
MemoryStream stream = new MemoryStream();
JpegEncoder encoder = new JpegEncoder(img, 90, stream);
encoder.Encode();
stream.Seek(0, SeekOrigin.Begin);
byte[] binaryData = new Byte[stream.Length];
long bytesRead = stream.Read(binaryData, 0, (int)stream.Length);
string base64String =
System.Convert.ToBase64String(binaryData,
0,
binaryData.Length);
return base64String;
}
This code is fine and it should work. I am using the same code to send an image stream to the server via a web service and then regenerate the image from those bytes... you can also save the bytes to a DB.
[WebMethod]
public string SaveImage(string data, string fileName)
{
byte[] imageBytes = System.Convert.FromBase64String(data);
MemoryStream mem = new MemoryStream();
mem.Write(imageBytes, 0, imageBytes.Length);
System.Drawing.Image img = System.Drawing.Image.FromStream(mem);
img.Save("D:\\FinalTest.jpg");
return "Saved !";
}
Sounds like [n] should be the byte array of the image. I have been looking into encoding a WriteableBitmap into a JPEG and found the same library, but have not looked into it in detail; I assume this would be the case. I will add more to this answer later once I have had a chance to try it. There should be some method to get the bytes of a WriteableBitmap in Silverlight, I guess, since it is possible to save to other types.
This problem must have been solved a million times, but Google has not been my friend.
I need to programmatically space a set of boxes to fill a certain length and be separated by a certain distance.
This is what I want:
[image of the spacing I want]: http://img257.imageshack.us/img257/3362/spacingiwant.png
Here is what I'm getting:
[image of the spacing I get]: http://img194.imageshack.us/img194/3506/spacingiget.png
Since I'm working in Objective-C using Core Graphics, I need a series of Rects that I can draw or drop an image into. My naive attempt draws a set of boxes with a certain spacing but leaves a space at the end.
Here is my code, which is in a drawRect: method
CGContextRef context = UIGraphicsGetCurrentContext();
CGFloat barStartX = 96;
CGFloat barStartY = 64.0;
CGFloat barWidth = 16;
CGFloat barHeight = 64;
CGFloat barGutter = 8;
int barSegments = 8;
for (int segmentNumber = 0; segmentNumber <= (barSegments - 1); ++segmentNumber) {
// get the box rect
CGRect segment = CGRectMake(barStartX + (barWidth * segmentNumber), barStartY , barWidth - barGutter, barHeight);
// plot box
CGContextFillRect(context, segment);
}
Before I create an impenetrable monstrosity of one-off code that even I won't understand 6 months from now, I'm wondering if there is a general solution to this spacing problem.
The answer doesn't have to be in Objective-C, as long as it's somewhat C-like. Readability has priority over performance considerations.
I think that despite your effort, this question is a bit unclear. Here's my attempt, though.
The equation that you describe in the title is:
N*x + (N-1)*S = L
Solving that for x gives us the width necessary for each box:
x = (L - (N-1)*S) / N
This would result in code similar to the following:
CGContextRef context = UIGraphicsGetCurrentContext();
int barSegments = 8;
CGFloat barStartX = 96;
CGFloat barStartY = 64.0;
CGFloat barTotalWidth = 196.0;
CGFloat barHeight = 64;
CGFloat barGutter = 8;
CGFloat barWidth = (barTotalWidth - (barSegments-1)*barGutter) / barSegments;
for (int segmentNumber = 0; segmentNumber < barSegments; ++segmentNumber) {
// get the box rect
CGRect segment = CGRectMake(barStartX + ((barWidth + barGutter) * segmentNumber), barStartY , barWidth, barHeight);
// plot box
CGContextFillRect(context, segment);
}
Given n boxes of width w and spacing s, the total length will be:
l = n × w + (n-1) × s
You know all the variables. The same formula can be used to place an arbitrary box: from your diagram, we can see that the total length is the same as the right-edge coordinate of the final box, so you can just plug in n from 1 up through the number of boxes to find the right edge of each one. Finding the left edge from that is trivial: subtract w and add 1 (since a box from 80 to 80 is 1 px wide).
Note that n counts from 1, not 0. You can change the formula, of course, to count from 0.
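As a sketch of that placement, reusing the variable names from the question (here barWidth is the width of one box and barGutter the spacing between boxes):
// Total length: l = n*w + (n-1)*s, which is also the right edge of the last box.
for (int n = 1; n <= barSegments; ++n) {
    // Right edge of box n, measured from the start of the bar.
    CGFloat rightEdge = n * barWidth + (n - 1) * barGutter;
    // With CGFloat coordinates the left edge is simply rightEdge - barWidth;
    // the "+1" above only matters for inclusive integer pixel coordinates.
    CGRect segment = CGRectMake(barStartX + rightEdge - barWidth, barStartY, barWidth, barHeight);
    CGContextFillRect(context, segment);
}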
I would do something like this:
CGFloat barStartX = 96;
CGFloat barStartY = 64.0;
CGFloat barWidth = 128;
CGFloat barHeight = 64;
...
int numSegments = 8; // number of segments
CGFloat spacing = 8; // spacing between segments
CGFloat segmentWidth = (barWidth - spacing*(numSegments - 1)) / numSegments;
CGRect segmentRect = CGRectMake(barStartX, barStartY, segmentWidth, barHeight);
for (int segmentNumber = 0; segmentNumber < numSegments; segmentNumber++)
{
segmentRect.origin.x = barStartX + segmentNumber*(segmentWidth + spacing);
CGContextFillRect(context, segmentRect);
}
I like to define the rectangle outside of the loop, and then, in the loop, update only the properties of the rectangle that are actually changing. In this case, it is only the x coordinate that changes every time, so the loop becomes pretty simple.
Draw blue and pink rectangles separately and draw one last blue rectangle :)
(or don't draw one last pink one)