Designing PDF with HTML in WPF application - wpf

Is it possible to design a PDF using HTML and JS in a WPF application (C#)? I tried using iTextSharp by taking an image of the control I want to export and pasting it into the PDF. It worked, but the problem is that the control has to be visible. So if there is a ListView in that control and the ListView doesn't fit in one window, the whole ListView doesn't get exported. Any ideas on how to approach this?
This is what I've done so far for exporting the ListView to PDF:
ListView item = list;
double width = item.ActualWidth;
double height = item.ActualHeight;
Document doc = new Document(new iTextSharp.text.Rectangle(1200f, 700f));
String filePath;
string path = Environment.CurrentDirectory + "\\export\\" ;
if (!Directory.Exists(path))
{
    Directory.CreateDirectory(path);
}
filePath = path + "\\Test_.pdf";
PdfWriter.GetInstance(doc, new FileStream(filePath, FileMode.Append));
doc.Open();
RenderTargetBitmap bmpCopied = new RenderTargetBitmap((int)Math.Round(width + 100), (int)Math.Round(height + 50), 0, 0, PixelFormats.Default);
DrawingVisual drawingVisual = new DrawingVisual();
using (DrawingContext drawingContext = drawingVisual.RenderOpen())
{
    item.Background = Brushes.White;
    VisualBrush visualBrush = new VisualBrush(item);
    drawingContext.DrawRectangle(Brushes.White, null, new Rect(new Point(), new Size(width + 100, height + 50)));
    drawingContext.DrawRectangle(visualBrush, null, new Rect(new Point(), new Size(width + 100, height + 50)));
}
bmpCopied.Render(drawingVisual);
item.Background = Brushes.Transparent;
byte[] data;
JpegBitmapEncoder encoder = new JpegBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(bmpCopied));
using (MemoryStream ms = new MemoryStream())
{
    encoder.Save(ms);
    data = ms.ToArray();
}
iTextSharp.text.Image pdfImage = iTextSharp.text.Image.GetInstance(data);
doc.Add(pdfImage);
doc.Close();
The PDF gets created, but opening it shows an error. Secondly, my ListView is scrollable; will all the items be shown in the PDF? I just need to get this working, so any approach would be OK for me.
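Note (an assumption, not an answer from the thread): the error on opening the PDF is commonly caused by opening the FileStream with FileMode.Append on a file that already exists, and a scrolled or virtualized ListView only renders the rows currently on screen. A minimal sketch of both fixes, reusing the names from the snippet above:
// Write a fresh file instead of appending to an existing (possibly corrupt) one.
PdfWriter.GetInstance(doc, new FileStream(filePath, FileMode.Create));

// Lay the ListView out at its full desired size so off-screen rows are rendered.
// If UI virtualization is enabled you may also need VirtualizingStackPanel.IsVirtualizing="False".
list.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));
Size fullSize = list.DesiredSize;
list.Arrange(new Rect(fullSize));
list.UpdateLayout();

RenderTargetBitmap bmp = new RenderTargetBitmap(
    (int)Math.Ceiling(fullSize.Width), (int)Math.Ceiling(fullSize.Height),
    96, 96, PixelFormats.Pbgra32);
bmp.Render(list);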

Related

Creating bitmap in memory using RenderTargetBitmap fails in production WCF Service

I am using WPF objects to generate a bitmap image in memory. The program that does this resides in a WCF web service. The image renders correctly when I run locally on IIS Express and on a test IIS 7 server. However, when running on a server used by QA, the image is not rendered correctly. More specifically, only the top 22px of a 250px-high image are rendered. The settings on both the test server and the QA server are supposed to be identical (insert skeptical face here).
Question: What settings in IIS could be affecting this image rendering? Also, I'm thinking there could be a threading issue, since RenderTargetBitmap renders asynchronously and I do get a partial image.
Here is the code I'm using:
private byte[] RenderGauge(ViewData viewData)
{
    double resolution = 4 * ReSize;
    double dpi = 96 * resolution;
    var view = new Gauge();
    var vm = new GuageViewModel(viewData);
    view.Measure(new Size(350, 70));
    view.Arrange(new Rect(new Size(350, 70)));
    var bounds = VisualTreeHelper.GetDescendantBounds(view);
    if (bounds != Rect.Empty)
    {
        height = (int)(Math.Floor(bounds.Height) + 1);
        width = (int)(Math.Floor(bounds.Width) + 1);
        size = new Size(width, height);
    }
    var bitmap = new RenderTargetBitmap((int)(width * resolution), (int)(height * resolution), dpi, dpi, PixelFormats.Pbgra32);
    var visual = new DrawingVisual();
    using (var context = visual.RenderOpen())
    {
        var brush = new VisualBrush(view);
        context.DrawRectangle(brush, null, new Rect(new Point(), bounds.Size));
    }
    bitmap.Render(visual);
    var encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bitmap));
    byte[] img;
    using (var MS = new MemoryStream())
    {
        encoder.Save(MS);
        img = MS.ToArray();
    }
    img = img == null ? new byte[0] : img;
    return img;
}
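Note (an assumption, not from the original post): in service scenarios a frequent cause of partial renders is dispatcher work (layout and data bindings) that has not completed when Render is called. A minimal sketch of one mitigation, assuming the rendering code runs on an STA thread that owns a Dispatcher (System.Windows.Threading):
// Flush pending dispatcher work (layout, data bindings) before rendering.
Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.SystemIdle, new Action(() => { }));
bitmap.Render(visual);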
So, I'm doing exactly the same thing and I had a number of issues rendering files. I've found that using a binding to a bitmap in the XAML helps. The code from my view model that returns the image source is:
public Uri ImageUri
{
    get { return new Uri(ImagePath, UriKind.Absolute); }
}
public BitmapImage ImageSource
{
    get
    {
        try
        {
            if (string.IsNullOrEmpty(ImagePath) || !File.Exists(ImagePath))
                return null;
            var image = new BitmapImage();
            image.BeginInit();
            image.CacheOption = BitmapCacheOption.OnLoad;
            image.UriSource = ImageUri;
            image.EndInit();
            return image;
        }
        catch (Exception e)
        {
            var logger = LogManager.GetLogger(typeof(ImageDetails));
            ExceptionHelper.LogExceptionMessage(logger, e);
        }
        return null;
    }
}
Then in the XAML I bind to the ImageSource property.
I think that most problems with RenderTargetBitmap are related to asynchronous bindings in the XAML, because the render method itself is synchronous.
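For anyone who wants to wire up the same binding from code-behind rather than XAML, a small sketch (the property name ImageSource and a DataContext set to the view model are assumptions based on the snippet above):
// Bind an Image control's Source to the view model's ImageSource property.
// The binding resolves against the control's DataContext; IsAsync is left at
// its default (false) so the bitmap is available when Render runs.
var image = new Image();
image.SetBinding(Image.SourceProperty, new Binding("ImageSource"));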

Programmatically setting the icon size

In WPF I am creating a menu item dynamically at run time.
I set the icon from a StreamGeometry that's stored in a ResourceDictionary. Everything works OK, but how do I set the size of the icon?
MenuItem menExit = new MenuItem();
menExit.Header = "Exit"; // will be changed later
menExit.Command = UICommands.CmdExit;
menExit.CommandBindings.Add(new CommandBinding(UICommands.CmdExit, CmdExitExecute, CmdExitCanExecute));
menExit.Icon = (StreamGeometry)FindResource("ImgExit");
//SET THE SIZE HERE????????
// Eventually, how do I set the fill color?
menu.Items.Add(menExit);
Note, I am doing all this at run time and not in XAML.
I suggest you create a Path on which you can specify Height, Width, and Fill, and set your StreamGeometry as the Data of the Path. Then use this Path as the Icon of the MenuItem.
var path = new Path
{
    Height = 20,
    Width = 20,
    Fill = new SolidColorBrush(Colors.Blue),
    Data = (StreamGeometry) FindResource("ImgExit")
};
menExit.Icon = path;
You can always try this:
/// <summary>
/// Converts a Geometry to an ImageSource: draws the Geometry on a bitmap surface and centers it.
/// </summary>
/// <param name="geometry"></param>
/// <param name="TargetSize"></param>
/// <returns></returns>
ImageSource Geometry_To_ImageSource(Geometry geometry, int TargetSize)
{
    var rect = geometry.GetRenderBounds(new Pen(Brushes.Black, 0));
    var bigger = rect.Width > rect.Height ? rect.Width : rect.Height;
    var scale = TargetSize / bigger;
    Geometry scaledGeometry = Geometry.Combine(geometry, geometry, GeometryCombineMode.Intersect, new ScaleTransform(scale, scale));
    rect = scaledGeometry.GetRenderBounds(new Pen(Brushes.Black, 0));
    Geometry transformedGeometry = Geometry.Combine(scaledGeometry, scaledGeometry, GeometryCombineMode.Intersect, new TranslateTransform(((TargetSize - rect.Width) / 2) - rect.Left, ((TargetSize - rect.Height) / 2) - rect.Top));
    RenderTargetBitmap bmp = new RenderTargetBitmap(TargetSize, TargetSize, 96, 96, PixelFormats.Pbgra32);
    DrawingVisual viz = new DrawingVisual();
    using (DrawingContext dc = viz.RenderOpen())
    {
        dc.DrawGeometry(Brushes.Black, null, transformedGeometry);
    }
    bmp.Render(viz);
    var mem = new MemoryStream();
    PngBitmapEncoder pngEncoder = new PngBitmapEncoder();
    pngEncoder.Frames.Add(BitmapFrame.Create(bmp));
    pngEncoder.Save(mem);
    var itm = GetImg(mem);
    return itm;
}
BitmapImage GetImg(MemoryStream ms)
{
    var bmp = new BitmapImage();
    bmp.BeginInit();
    ms.Position = 0; // rewind the stream so the decoder reads the PNG from the start
    bmp.StreamSource = ms;
    bmp.EndInit();
    return bmp;
}
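A possible way to use these helpers for the original question (the resource key ImgExit and the size of 20 are taken from the earlier snippets; treat this as a sketch, not tested code):
// Render the geometry to a 20x20 ImageSource and use it as the menu item icon.
menExit.Icon = new Image
{
    Source = Geometry_To_ImageSource((StreamGeometry)FindResource("ImgExit"), 20),
    Width = 20,
    Height = 20
};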

Snapshot of a WPF Canvas Area using RenderTargetBitmap

I want to create a snapshot of the Canvas area in my application. I'm using a VisualBrush to get the snapshot and saving it using a PngBitmapEncoder. But the resulting PNG is just an empty black image. I'm not sure whether the issue is with the BitmapSource I created or with the encoder. Here is the code I'm using:
public void ConvertToBitmapSource(UIElement element)
{
    var target = new RenderTargetBitmap((int)(element.RenderSize.Width), (int)(element.RenderSize.Height), 96, 96, PixelFormats.Pbgra32);
    var brush = new VisualBrush(element);
    var visual = new DrawingVisual();
    var drawingContext = visual.RenderOpen();
    drawingContext.DrawRectangle(brush, null, new Rect(new Point(0, 0),
        new Point(element.RenderSize.Width, element.RenderSize.Height)));
    drawingContext.Close();
    target.Render(visual);
    PngBitmapEncoder encoder = new PngBitmapEncoder();
    BitmapFrame outputFrame = BitmapFrame.Create(target);
    encoder.Frames.Add(outputFrame);
    using (FileStream file = File.OpenWrite("TestImage.png"))
    {
        encoder.Save(file);
    }
}
Not sure why exactly your code isn't working. This works:
public void WriteToPng(UIElement element, string filename)
{
    var rect = new Rect(element.RenderSize);
    var visual = new DrawingVisual();
    using (var dc = visual.RenderOpen())
    {
        dc.DrawRectangle(new VisualBrush(element), null, rect);
    }
    var bitmap = new RenderTargetBitmap(
        (int)rect.Width, (int)rect.Height, 96, 96, PixelFormats.Default);
    bitmap.Render(visual);
    var encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bitmap));
    using (var file = File.OpenWrite(filename))
    {
        encoder.Save(file);
    }
}
Thank you both for the question and the answer.
For the benefit of others looking for the same answer: I found that Clemens' way leaves a black band in the image, with the image shifted either down or right, as if the element was not being rendered at the correct position in the bitmap.
So I had to use the VisualBrush as Amar suggested.
Here is the code that worked for me:
RenderTargetBitmap RenderVisual(UIElement elt)
{
    PresentationSource source = PresentationSource.FromVisual(elt);
    RenderTargetBitmap rtb = new RenderTargetBitmap((int)elt.RenderSize.Width,
        (int)elt.RenderSize.Height, 96, 96, PixelFormats.Default);
    VisualBrush sourceBrush = new VisualBrush(elt);
    DrawingVisual drawingVisual = new DrawingVisual();
    DrawingContext drawingContext = drawingVisual.RenderOpen();
    using (drawingContext)
    {
        drawingContext.DrawRectangle(sourceBrush, null, new Rect(new Point(0, 0),
            new Point(elt.RenderSize.Width, elt.RenderSize.Height)));
    }
    rtb.Render(drawingVisual);
    return rtb;
}
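Note (another assumption, not stated in the answers above): the shifted image with a band often comes from the element having a margin or offset inside its parent. Sizing and positioning the capture from VisualTreeHelper.GetDescendantBounds is one way to compensate:
// Size the capture from the element's descendant bounds so a margin or
// parent offset does not shift the rendered content.
Rect bounds = VisualTreeHelper.GetDescendantBounds(element);
var dv = new DrawingVisual();
using (DrawingContext dc = dv.RenderOpen())
{
    dc.DrawRectangle(new VisualBrush(element), null, new Rect(new Point(), bounds.Size));
}
var rtb = new RenderTargetBitmap(
    (int)Math.Ceiling(bounds.Width), (int)Math.Ceiling(bounds.Height),
    96, 96, PixelFormats.Pbgra32);
rtb.Render(dv);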

WPF issue with VisualBrush for Image while moving

I have an Image, and I have created a VisualBrush for the image that I use while moving the object from one point to another, but I don't see the image in the VisualBrush. If you look at my rectangle, it is supposed to show the image.
See image : http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/e8833983-3d73-45e1-8af1-3bc27846441d
Here is the code:
internal static VisualBrush GetVisualBrushByObject(LabelObject obj, Rect objectRect, int quality, FlowDirection flowdirection)
{
    DrawingVisual drawingVisual = new DrawingVisual();
    DrawingContext drawingContext = drawingVisual.RenderOpen();
    Rect objectbounds = new Rect(0, 0, objectRect.Width, objectRect.Height);
    if (obj is TextObject)
    {
        TextObject txtObj = obj.Clone() as TextObject;
        DymoTextBlock txtBlock = txtObj.DymoTextBlock as DymoTextBlock;
        objectbounds = new Rect(txtBlock.GetNaturalSize(txtBlock.GetFormattedText(flowdirection)));
    }
    if (obj is ImageObject)
    {
        drawingContext.DrawImage(((ImageObject)obj).Image, objectbounds);
    }
    LabelObject.RenderParams lrp = new LabelObject.RenderParams(drawingContext, new Common.Resolution(96, 96), false, objectbounds, flowdirection);
    obj.Render(lrp);
    VisualBrush vBrush = new VisualBrush();
    vBrush.TileMode = TileMode.None;
    vBrush.Stretch = Stretch.Fill;
    if (obj is ImageObject)
    {
        vBrush.Opacity = 0.4;
    }
    drawingContext.Close();
    vBrush.Visual = drawingVisual;
    return vBrush;
}
Please help me. Thanks.
If you are moving your image with a Transform (e.g. a TranslateTransform), then you have to undo it for the VisualBrush phase.
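A sketch of that idea (moveTransform stands in for whatever TranslateTransform moves the object; obj and objectbounds are the names from the question's code): push the inverse translation before drawing so the image lands at the brush's origin.
// Compensate for the TranslateTransform when drawing into the DrawingVisual.
using (DrawingContext dc = drawingVisual.RenderOpen())
{
    dc.PushTransform(new TranslateTransform(-moveTransform.X, -moveTransform.Y));
    dc.DrawImage(((ImageObject)obj).Image, objectbounds);
    dc.Pop();
}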

Add TransformGroup to a FrameworkElement when rendering WPF to a PNG

I've got an app that turns some XAML UserControls into PNGs. This has worked really well up to now; unfortunately, I now need to double the size of the images.
My method (which doesn't work!) was to add a ScaleTransform to the visual element after I've loaded it ...
This line is the new line at the top of the SaveUsingEncoder method.
visual.RenderTransform = GetScaleTransform(2);
The PNG is the new size (3000 x 2000), but the XAML is rendered at 1500 x 1000 in the centre of the image.
Can anyone assist, please?
private void Load(string filename)
{
    var stream = new FileStream(filename, FileMode.Open);
    var frameworkElement = (FrameworkElement)(XamlReader.Load(stream));
    var scale = 2;
    var encoder = new PngBitmapEncoder();
    var availableSize = new Size(1500 * scale, 1000 * scale);
    frameworkElement.Measure(availableSize);
    frameworkElement.Arrange(new Rect(availableSize));
    name = name.Replace(" ", "-");
    SaveUsingEncoder(frameworkElement, string.Format(@"{0}.png", name), encoder, availableSize);
}
private TransformGroup GetScaleTransform(int scale)
{
    var myScaleTransform = new ScaleTransform {ScaleY = scale, ScaleX = scale};
    var myTransformGroup = new TransformGroup();
    myTransformGroup.Children.Add(myScaleTransform);
    return myTransformGroup;
}
private void SaveUsingEncoder(FrameworkElement visual, string fileName, BitmapEncoder encoder, Size size)
{
    visual.RenderTransform = GetScaleTransform(2);
    var bitmap = new RenderTargetBitmap(
        (int) size.Width,
        (int) size.Height,
        96,
        96,
        PixelFormats.Pbgra32);
    bitmap.Render(visual);
    var frame = BitmapFrame.Create(bitmap);
    encoder.Frames.Add(frame);
    using (var stream = File.Create(fileName))
    {
        encoder.Save(stream);
    }
}
Call visual.UpdateLayout() before rendering into the RenderTargetBitmap.
(Thanks to Clemens for this answer, but he put it as a comment!)
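A sketch of where that call would sit in SaveUsingEncoder above (re-running Measure and Arrange at the scaled size is my own assumption, not part of the comment):
visual.RenderTransform = GetScaleTransform(2);
visual.Measure(size);                 // size is the 3000 x 2000 target passed in
visual.Arrange(new Rect(size));
visual.UpdateLayout();                // apply the transform and layout before rendering
bitmap.Render(visual);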
