WPF JpegEncoder/Decoder preserve DPI precision during disk read/write

I am encountering an issue when reading and writing a BitmapSource to disk using a JpegEncoder/Decoder. The following code sample illustrates the problem:
//initialize with some dummy test data
int outputHeight = 100;
int outputWidth = 100;
int outputStride = 100 * 3; //24 bpp
byte[] outputBytes = new byte[100 * outputStride];
double dpiX = 20.5;
double dpiY = 20.5;
//generate image
BitmapSource testOutput = BitmapImage.Create(outputWidth, outputHeight,
dpiX, dpiY, PixelFormats.Bgr24, null,
outputBytes, outputStride);
Trace.Assert(testOutput.DpiX == dpiX); //succeeds
Trace.Assert(testOutput.DpiY == dpiY); //succeeds
//write to disk
JpegBitmapEncoder encoder = new JpegBitmapEncoder();
using (FileStream fileStream = new FileStream(@"F:\Users\Caleb\Desktop\test.jpg", FileMode.Create))
{
encoder.Frames.Add(BitmapFrame.Create(testOutput));
encoder.QualityLevel = 100;
Trace.Assert(encoder.Frames[0].DpiX == dpiX); //succeeds
Trace.Assert(encoder.Frames[0].DpiY == dpiY); //succeeds
encoder.Save(fileStream);
}
//read back
using (Stream imageStreamSource = new FileStream(@"F:\Users\Caleb\Desktop\test.jpg", FileMode.Open, FileAccess.Read, FileShare.Read))
{
JpegBitmapDecoder decoder = new JpegBitmapDecoder(imageStreamSource, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
BitmapSource reread = decoder.Frames[0];
Trace.Assert(reread.DpiX == dpiX); //fails; reread.DpiX is 21.0
Trace.Assert(reread.DpiY == dpiY); //fails; reread.DpiY is 21.0
}
As indicated in the comments, the DPI that is read back is not equal to the input value. It seems that somewhere in the encoding or decoding process the DPI attributes are being rounded to the nearest whole number.
Is there a way to retain the fractional part of the DPI values in the image that is read back from disk?

Even though the API allows you to pass in a double (and hence a value like 20.5), the JPEG file stores its DPI metadata as integer values, so the value is rounded (up, in your case) to the nearest integer. Do you expect any meaningful difference between a DPI of 20.5 and 21.0?

Well, I'm not sure what's going on with the JpegBitmapEncoder/Decoder chain, but using the BmpBitmapEncoder/Decoder instead yields DPI values that are accurate to about one decimal place. Not ideal, but it will probably work for my application.
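For reference, here is a minimal sketch of that BMP round trip, reusing testOutput from the question above (the file path is a placeholder; note that BmpBitmapEncoder has no QualityLevel property):
//write to disk with the BMP codec
BmpBitmapEncoder bmpEncoder = new BmpBitmapEncoder();
using (FileStream fileStream = new FileStream(@"F:\Users\Caleb\Desktop\test.bmp", FileMode.Create))
{
bmpEncoder.Frames.Add(BitmapFrame.Create(testOutput));
bmpEncoder.Save(fileStream);
}
//read back
using (Stream imageStreamSource = new FileStream(@"F:\Users\Caleb\Desktop\test.bmp", FileMode.Open, FileAccess.Read, FileShare.Read))
{
BmpBitmapDecoder bmpDecoder = new BmpBitmapDecoder(imageStreamSource, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
BitmapSource reread = bmpDecoder.Frames[0];
//in my tests the values come back close to 20.5 rather than being rounded to 21.0
Trace.WriteLine(reread.DpiX);
Trace.WriteLine(reread.DpiY);
}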

Related

How do I set a WPF Image's Source to a byte array in C# code behind?

I'm building a small app using C#/WPF.
The application receives (from an unmanaged C++ library) a byte array (byte[]) from a bitmap source.
In my WPF window, I have an Image control (System.Windows.Controls.Image) which I will use to display the bitmap.
In the code behind (C#) I need to be able to take that byte array, create a BitmapSource/ImageSource, and assign it as the source for my Image control.
// byte array source from unmanaged library
byte[] imageData;
// Image Control Definition
System.Windows.Controls.Image image = new Image() { Width = 100, Height = 100 };
// Assign the Image Source
image.Source = ConvertByteArrayToImageSource(imageData);
private BitmapSource ConvertByteArrayToImageSource(byte[] imageData)
{
??????????
}
I've been working on this for a while and haven't been able to figure it out. I've tried several solutions that I found by googling around, but so far nothing has worked.
I've tried:
1) Creating a BitmapSource
var stride = ((width * PixelFormats.Bgr24.BitsPerPixel + 31) / 32) * 4;
var imageSrc = BitmapSource.Create(width, height, 96d, 96d, PixelFormats.Bgr24, null, imageData, stride);
That threw a runtime exception saying the buffer was too small:
"Buffer size is not sufficient"
2) I tried using a memory stream:
BitmapImage bitmapImage = new BitmapImage();
using (var mem = new MemoryStream(imageData))
{
bitmapImage.BeginInit();
bitmapImage.CreateOptions = BitmapCreateOptions.PreservePixelFormat;
bitmapImage.CacheOption = BitmapCacheOption.OnLoad;
bitmapImage.StreamSource = mem;
bitmapImage.EndInit();
return bitmapImage;
}
This code threw an exception on the EndInit() call:
"No imaging component suitable to complete this operation was found."
SOS! I've spent a couple of days on this one and am clearly stuck.
Any help/ideas/direction would be greatly appreciated.
Thanks,
JohnB
Your stride calculation is wrong. It is the number of full bytes per scan line, and should therefore be calculated like this:
var format = PixelFormats.Bgr24;
var stride = (width * format.BitsPerPixel + 7) / 8;
var imageSrc = BitmapSource.Create(
width, height, 96d, 96d, format, null, imageData, stride);
Of course you also have to make sure that you use the correct image size, i.e. that the width and height values actually correspond with the data in imageData.
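As a quick sanity check along those lines (a sketch, using the width, height and imageData names from the question), you can verify the buffer length before calling Create; an array shorter than height * stride is what produces the "Buffer size is not sufficient" error:
var format = PixelFormats.Bgr24;
var stride = (width * format.BitsPerPixel + 7) / 8;
// the raw pixel buffer must hold at least height full scan lines
if (imageData.Length < height * stride)
throw new ArgumentException("Buffer too small: expected at least " + (height * stride) + " bytes, got " + imageData.Length + ".");
var imageSrc = BitmapSource.Create(width, height, 96d, 96d, format, null, imageData, stride);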

BitmapImage from BitmapSource always Bgr32

I'm loading a BitmapImage from a BitmapSource and I always find the BitmapImage's format is Bgr32 instead of Bgra32, which is what the BitmapSource is. There is a similar thread on this here:
BitmapImage from file PixelFormat is always bgr32
but using BitmapCreateOptions.PreservePixelFormat didn't work for me as suggested in that thread. Here's what I'm doing:
// I have verified that b is a valid BitmapSource and its format is Bgra32
// the following code produces a file (testbmp2.bmp) with an alpha channel as expected
// placing a breakpoint here and querying b.Format in the Immediate window also produces
// a format of Bgra32
BmpBitmapEncoder test = new BmpBitmapEncoder();
FileStream stest = new FileStream(@"c:\temp\testbmp2.bmp", FileMode.Create);
test.Frames.Add(BitmapFrame.Create(b));
test.Save(stest);
stest.Close();
// however, this following snippet results in bmp.Format being Bgr32
BitmapImage bmp = new BitmapImage();
BmpBitmapEncoder encoder = new BmpBitmapEncoder();
MemoryStream stream = new MemoryStream();
encoder.Frames.Add(BitmapFrame.Create(b));
encoder.Save(stream);
bmp.BeginInit();
bmp.StreamSource = new MemoryStream(stream.ToArray());
bmp.CreateOptions = BitmapCreateOptions.PreservePixelFormat;
bmp.CacheOption = BitmapCacheOption.None;
bmp.EndInit();
stream.Close();
Is there a way to create a BitmapImage from the BitmapSource and preserve the alpha channel?
Update:
I used the same code posted in the other thread to load testbmp2.bmp from file, and after loading, the Format property is Bgr32.
BitmapImage b1 = new BitmapImage();
b1.BeginInit();
b1.UriSource = new Uri(@"c:\temp\testbmp2.bmp");
b1.CreateOptions = BitmapCreateOptions.PreservePixelFormat | BitmapCreateOptions.IgnoreImageCache;
b1.CacheOption = BitmapCacheOption.OnLoad;
b1.EndInit();
So maybe I'm not saving the FileStream/MemoryStream correctly? That doesn't seem right, since after saving the FileStream I can open testbmp2.bmp in Photoshop and see the alpha channel.
Update 2:
I think I need to re-word the issue I'm having. I'm trying to display separate channels of a texture. I am displaying the bitmap via an Image control, with a simple precompiled HLSL shader assigned to the Image's Effect property that masks out channels based on user input. Not being able to get the BitmapSource into a BitmapImage while retaining the alpha channel was only part of the problem. I realize now that since my original BitmapSource's Format was Bgra32, I could assign it directly to the Image's Source property. The problem appears to be that an Image object will only display texels with pre-multiplied alpha...? Here's my shader code:
sampler2D inputImage : register(s0);
float4 channelMasks : register(c0);
float4 main (float2 uv : TEXCOORD) : COLOR0
{
float4 outCol = tex2D(inputImage, uv);
if (!any(channelMasks.rgb - float3(1, 0, 0)))
{
outCol.rgb = float3(outCol.r, outCol.r, outCol.r);
}
else if (!any(channelMasks.rgb - float3(0, 1, 0)))
{
outCol.rgb = float3(outCol.g, outCol.g, outCol.g);
}
else if (!any(channelMasks.rgb - float3(0, 0, 1)))
{
outCol.rgb = float3(outCol.b, outCol.b, outCol.b);
}
else
{
outCol *= channelMasks;
}
if (channelMasks.a == 1.0)
{
outCol.r = outCol.a; // * 0.5 + 0.5;
outCol.g = outCol.a;
outCol.b = outCol.a;
}
outCol.a = 1;
return outCol;
}
I've tested it pretty extensively and I'm fairly sure that it's setting the outCol value properly. When I pass in a channel mask value of (1.0, 1.0, 1.0, 0.0), the image displayed is the RGB channels, with the areas where the alpha is black drawing as black - as you would expect if WPF were pre-multiplying the alpha onto the RGB channels. Does anyone know of a way to display a BitmapSource in an Image without pre-multiplied alpha? Or more to the point, is there a way to have the effect receive the texture before the alpha pre-multiply happens? I'm not sure what order things are done in, but apparently the pre-multiply happens before the texture is written to the s0 register and the shader gets to work on it. I could try to do this with WriteableBitmap and copy the alpha out to another BitmapSource, but that would all be using software rendering, I think...?
I ran into a similar problem lately. The BmpBitmapEncoder does not care about the alpha channel and therefore does not preserve it. An easy fix is to use a PngBitmapEncoder instead.
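For example, here is a minimal sketch of the second snippet from the question with the PNG codec swapped in (b is the same Bgra32 BitmapSource as before):
BitmapImage bmp = new BitmapImage();
PngBitmapEncoder encoder = new PngBitmapEncoder();
MemoryStream stream = new MemoryStream();
encoder.Frames.Add(BitmapFrame.Create(b));
encoder.Save(stream);
bmp.BeginInit();
bmp.StreamSource = new MemoryStream(stream.ToArray());
bmp.CreateOptions = BitmapCreateOptions.PreservePixelFormat;
bmp.CacheOption = BitmapCacheOption.None;
bmp.EndInit();
stream.Close();
// bmp.Format should now come back as Bgra32, with the alpha channel intact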

How can I convert byte[] to BitmapImage?

I have a byte[] that represents the raw data of an image. I would like to convert it to a BitmapImage.
I tried several examples I found but I kept getting the following exception
"No imaging component suitable to complete this operation was found."
I think it is because my byte[] does not actually represent an encoded image but only the raw pixel bits.
So my question, as mentioned above, is how to convert a byte[] of raw bits to a BitmapImage.
The code below does not create a BitmapSource from a raw pixel buffer, as asked in the question.
But in case you want to create a BitmapImage from an encoded frame like a PNG or a JPEG, you would do it like this:
public static BitmapImage LoadFromBytes(byte[] bytes)
{
using (var stream = new MemoryStream(bytes))
{
var image = new BitmapImage();
image.BeginInit();
image.CacheOption = BitmapCacheOption.OnLoad;
image.StreamSource = stream;
image.EndInit();
return image;
}
}
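For example, assuming image is an Image control and the file path is just a placeholder:
image.Source = LoadFromBytes(File.ReadAllBytes(@"c:\temp\photo.png"));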
When your byte array contains a bitmap's raw pixel data, you may create a BitmapSource (which is the base class of BitmapImage) by the static method BitmapSource.Create.
However, you need to specify a few parameters of the bitmap. You must know in advance the width and height and also the PixelFormat of the buffer.
byte[] buffer = ...;
var width = 100; // for example
var height = 100; // for example
var dpiX = 96d;
var dpiY = 96d;
var pixelFormat = PixelFormats.Pbgra32; // for example
var stride = (width * pixelFormat.BitsPerPixel + 7) / 8;
var bitmap = BitmapSource.Create(width, height, dpiX, dpiY,
pixelFormat, null, buffer, stride);
I ran across this same error, but it was because my array was not getting filled with the actual data. I had an array of bytes that was the expected length, but the values were all still 0 - they had never been written!
My particular issue - and I suspect this applies to others arriving at this question as well - was with the OracleBlob parameter. I didn't think I needed it, and thought I could just do something like:
DataSet ds = new DataSet();
OracleCommand cmd = new OracleCommand(strQuery, conn);
OracleDataAdapter oraAdpt = new OracleDataAdapter(cmd);
oraAdpt.Fill(ds);
if (ds.Tables[0].Rows.Count > 0)
{
byte[] myArray = (byte[])ds.Tables[0].Rows[0]["MY_BLOB_COLUMN"];
}
How wrong I was! To get the real bytes in that blob, I needed to read the result into an OracleBlob object. Instead of filling a dataset/datatable, I did this:
OracleBlob oBlob = null;
byte[] myArray = null;
OracleCommand cmd = new OracleCommand(strQuery, conn);
OracleDataReader result = cmd.ExecuteReader();
result.Read();
if (result.HasRows)
{
oBlob = result.GetOracleBlob(0);
myArray = new byte[oBlob.Length];
oBlob.Read(myArray, 0, Convert.ToInt32(myArray.Length));
oBlob.Erase();
oBlob.Close();
oBlob.Dispose();
}
Then, I could take myArray and do this:
if (myArray != null)
{
if (myArray.Length > 0)
{
MyImage.Source = LoadBitmapFromBytes(myArray);
}
}
And my revised LoadBitmapFromBytes function from the other answer:
public static BitmapImage LoadBitmapFromBytes(byte[] bytes)
{
var image = new BitmapImage();
using (var stream = new MemoryStream(bytes))
{
stream.Seek(0, SeekOrigin.Begin);
image.BeginInit();
image.StreamSource = stream;
image.CreateOptions = BitmapCreateOptions.PreservePixelFormat;
image.CacheOption = BitmapCacheOption.OnLoad;
image.UriSource = null;
image.EndInit();
}
return image;
}
Create a MemoryStream from the raw bytes and pass that into your Bitmap constructor.
Like this:
MemoryStream stream = new MemoryStream(bytes);
Bitmap image = new Bitmap(stream);

Cannot decode jpeg using JpegBitmapDecoder

I have the following two functions to convert bytes to an image and display it in an Image control in WPF.
private JpegBitmapDecoder ConvertBytestoImageStream(byte[] imageData)
{
Stream imageStreamSource = new MemoryStream(imageData);
JpegBitmapDecoder decoder = new JpegBitmapDecoder(imageStreamSource, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
BitmapSource bitmapSource = decoder.Frames[0];
return decoder;
}
The above code does not work at all. I always get the exception "No imaging component found", and the image is not displayed.
private MemoryStream ConvertBytestoImageStream(int CameraId, byte[] ImageData, int imgWidth, int imgHeight, DateTime detectTime)
{
GCHandle gch = GCHandle.Alloc(ImageData, GCHandleType.Pinned);
int stride = 4 * ((24 * imgWidth + 31) / 32);
Bitmap bmp = new Bitmap(imgWidth, imgHeight, stride, PixelFormat.Format24bppRgb, gch.AddrOfPinnedObject());
MemoryStream ms = new MemoryStream();
bmp.Save(ms, ImageFormat.Jpeg);
gch.Free();
return ms;
}
This function works, but is very slow. I wish to optimize my code.
Your ConvertBytestoImageStream works fine for me if I pass it a JPEG buffer. There are, however, a few things that could be improved. Depending on whether you really want to return a decoder or a bitmap, the method could be written this way:
private BitmapDecoder ConvertBytesToDecoder(byte[] buffer)
{
using (MemoryStream stream = new MemoryStream(buffer))
{
return BitmapDecoder.Create(stream,
BitmapCreateOptions.PreservePixelFormat,
BitmapCacheOption.OnLoad); // enables closing the stream immediately
}
}
or this way:
private ImageSource ConvertBytesToImage(byte[] buffer)
{
using (MemoryStream stream = new MemoryStream(buffer))
{
BitmapDecoder decoder = BitmapDecoder.Create(stream,
BitmapCreateOptions.PreservePixelFormat,
BitmapCacheOption.OnLoad); // enables closing the stream immediately
return decoder.Frames[0];
}
}
Note that instead of using JpegBitmapDecoder this code utilizes a static factory method of the abstract base class BitmapDecoder which automatically selects the proper decoder for the provided data stream. Hence this code can be used for all image formats supported by WPF.
Note also that the Stream object is used inside a using block which takes care of disposing it when it is no longer needed. BitmapCacheOption.OnLoad ensures that the whole stream is loaded into the decoder and can be closed afterwards.
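For example, the result can then be assigned straight to an Image control's Source (assuming an Image control named image and an encoded JPEG buffer, as in the question):
// buffer holds the encoded JPEG bytes received from the camera
image.Source = ConvertBytesToImage(buffer);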

Silverlight 4.0: How to determine the file size of an object in MemoryStream

byte[] imageBytes = Convert.FromBase64String(base64String);
MemoryStream ms = new MemoryStream(imageBytes, 0,
imageBytes.Length);
How do I determine the file size of the image?
You should be able to determine the size of the stream pretty easily.
long length = ms.Length; // ms is the MemoryStream from your code above
length is now the length of the stream in bytes. This byte count should also be the size of any file that contained only this stream.
Edit:
If you mean in pixels you could use something like:
Image img = Image.FromStream(ms);
int width = img.Width;
int height = img.Height;
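Note that Image.FromStream comes from System.Drawing, which is not available in Silverlight; a rough Silverlight-only sketch of the same idea would be to load the stream into a BitmapImage and read its pixel dimensions:
BitmapImage bitmap = new BitmapImage();
bitmap.SetSource(ms); // ms is the MemoryStream from the question
int width = bitmap.PixelWidth;
int height = bitmap.PixelHeight;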
