I capture an image using the photo or camera task, and I want to resize it to, say, 480x240 from the captured size of around 2592x1944.
How do I do this?
Thanks
You can pass the JPEG stream you get from the Completed event to the PictureDecoder.DecodeJpeg method. The second and third parameters define the target size. You will get back a WriteableBitmap that can be manipulated further and saved to the MediaLibrary. See this blog post for an example.
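For completeness, a minimal sketch of that flow in a CameraCaptureTask Completed handler (Windows Phone Silverlight); the file name and JPEG quality are illustrative:

void OnCaptureCompleted(object sender, PhotoResult e)
{
    if (e.TaskResult != TaskResult.OK)
        return;

    // Decode the captured JPEG at the reduced size (at most 480x240).
    WriteableBitmap bitmap = PictureDecoder.DecodeJpeg(e.ChosenPhoto, 480, 240);

    // Re-encode the resized image and save it back to the MediaLibrary.
    using (var ms = new MemoryStream())
    {
        bitmap.SaveJpeg(ms, 480, 240, 0, 85); // orientation 0, quality 85
        ms.Seek(0, SeekOrigin.Begin);
        new MediaLibrary().SavePicture("resized.jpg", ms);
    }
}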
I am using the Microblink SDK with ReactJS. When I open the camera to take a photo of a document, it captures the whole frame, not just the area inside the rectangle.
Is there any option to capture only the part of the picture inside the rectangle?
Thanks!!!
The rectangle displayed on the screen is there to help the user position the document they are scanning.
Specifically, the detection of the document and data extraction works best if the entire image is sent to processing, and if the document is positioned roughly around the rectangle.
That means that if you “cropped” the image to the rectangle before recognition, you would effectively lower the success rate of scanning. Alternatively, you would have to move the camera further away from the document to keep it in that sweet spot where the document has a margin around it, resulting in a lower-resolution image, lowering the success rate once more and defeating the purpose of the initial cropping.
I stumbled upon a problem during my work with Codename One and the parse4cn1 plugin.
I am trying to upload an image that I took with the capture module of Codename One. According to the parse4cn1 documentation, I have to convert the image into bytes with the “getBytes” function. But according to the Codename One documentation, getBytes only works with Strings and not with images.
Do you know how to “convert” the image appropriately?
I have been using this documentation, the section "uploading files":
https://github.com/sidiabale/parse4cn1/wiki/Usage-Examples#uploading-files
getBytes() is a method of EncodedImage, not Image. An EncodedImage maps to a PNG or JPEG and is a subclass of Image. You can use the EncodedImage.create*() methods to load an EncodedImage directly, or convert an existing Image to an EncodedImage:
EncodedImage e = EncodedImage.createFromImage(img, false);
The second argument indicates whether this should become a PNG or a JPEG. If the image includes transparent/translucent pixels, use PNG. If the image is a photo, use JPEG.
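Putting it together with parse4cn1, a minimal sketch using getImageData() to obtain the encoded bytes; the ParseFile constructor follows the wiki example linked above, so treat its exact signature, the file name, and the content type as assumptions:

try {
    // Blocks until the user takes a photo, then returns the file path.
    String path = Capture.capturePhoto();
    Image img = Image.createImage(path);

    // false = encode as JPEG; photos have no transparent pixels.
    EncodedImage enc = EncodedImage.createFromImage(img, false);
    byte[] data = enc.getImageData(); // the encoded JPEG bytes

    // Assumed to mirror the Parse SDK's ParseFile(name, data, contentType).
    ParseFile file = new ParseFile("photo.jpg", data, "image/jpeg");
    file.save();
} catch (Exception ex) {
    Log.e(ex);
}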
I am building a photo editor application.
I learned that we can create a JPEG from a Canvas (or any UIElement).
I am working with a proxy (smaller) image while in editing mode.
What I cannot figure out is how to save the original image, replacing the proxy, without rendering the original image on screen.
Thanks and regards
Before saving, change the canvas size to the original width and height and force a layout pass, e.g. canvas.UpdateLayout().
Then convert the UIElement to a WriteableBitmap and save it.
Finally, scale the canvas back down to its previous width and height.
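A minimal sketch of those steps for Windows Phone Silverlight; the original dimensions (2592x1944), the JPEG quality, and the stream handling are illustrative:

// Remember the proxy (on-screen) size so it can be restored afterwards.
double proxyWidth = canvas.Width, proxyHeight = canvas.Height;

// 1. Resize the canvas to the original dimensions and force a layout pass.
canvas.Width = 2592;
canvas.Height = 1944;
canvas.Measure(new Size(2592, 1944));
canvas.Arrange(new Rect(0, 0, 2592, 1944));
canvas.UpdateLayout();

// 2. Render the UIElement into a WriteableBitmap at full size.
var fullSize = new WriteableBitmap(canvas, null);

// 3. Save the JPEG, then restore the proxy size for on-screen editing.
using (var ms = new MemoryStream())
{
    fullSize.SaveJpeg(ms, fullSize.PixelWidth, fullSize.PixelHeight, 0, 90);
    // ... write ms to isolated storage or the MediaLibrary
}
canvas.Width = proxyWidth;
canvas.Height = proxyHeight;
canvas.UpdateLayout();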
I'm developing an application that shall receive images from a camera device and display them in a GTK window.
The camera delivers raw RGB images (3 bytes per pixel, no alpha channel, fixed size) at a varying frame rate (1-50 fps).
I've already done all that hardware stuff and now have a callback function that gets called with every new image captured by the camera.
What is the easiest, but still fast enough, way to display those images in my window?
Here's what I already tried:
using gdk_draw_rgb_image() on a GTK drawing area: basically worked, but rendered so slowly that the drawing operations overlapped and the application crashed after the first few frames, even at a 1 fps capture rate.
allocating a GdkPixbuf for each new frame and calling gtk_image_set_from_pixbuf() on a GTK image widget: only displays the first frame; after that I see no change in the window. That may be a bug in my code, but I don't know whether this approach would be fast enough anyway.
using Cairo (cairo_set_source_surface(), then cairo_paint()): seemed pretty fast, but the image looked striped; I don't know if the image format is compatible.
Currently I'm thinking about trying something like GStreamer and treating those images as a video stream, but I'm not sure whether that is overkill for my simple mechanism.
Thanks in advance for any advice!
The entire GdkRGB API seems to be deprecated, so that's probably not the recommended way to solve this.
The same goes for the call to render a pixbuf. The documentation there points at Cairo, so the solution seems to be to continue investigating why your image looked incorrect when rendered by Cairo.
unwind is right: Cairo is the way to go if you want something that will work in both GTK2 and GTK3. As your samples are RGB without alpha, you should use the CAIRO_FORMAT_RGB24 format. Make sure the surface you paint is in that format. Also try to make sure that you're not constantly allocating/destroying the surface buffer if the input image keeps the same size.
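A minimal sketch of the conversion, assuming the frame pointer and dimensions come from your camera callback. CAIRO_FORMAT_RGB24 stores each pixel in 32 bits (x8r8g8b8, native endian) and each row must respect the surface stride; copying raw 3-byte-per-pixel data straight into the surface is a classic cause of exactly the striping you describe:

#include <cairo.h>
#include <stdint.h>

cairo_surface_t *rgb_frame_to_surface(const uint8_t *rgb, int width, int height)
{
    cairo_surface_t *surface =
        cairo_image_surface_create(CAIRO_FORMAT_RGB24, width, height);
    int stride = cairo_image_surface_get_stride(surface); /* bytes per row */

    /* Flush before touching the pixel data directly, mark dirty afterwards. */
    cairo_surface_flush(surface);
    unsigned char *dst = cairo_image_surface_get_data(surface);

    for (int y = 0; y < height; y++) {
        uint32_t *row = (uint32_t *)(dst + y * stride);
        for (int x = 0; x < width; x++) {
            const uint8_t *p = rgb + (y * width + x) * 3;
            row[x] = (uint32_t)p[0] << 16 | (uint32_t)p[1] << 8 | p[2];
        }
    }
    cairo_surface_mark_dirty(surface);
    return surface;
}

In the draw/expose handler you can then call cairo_set_source_surface() and cairo_paint() with this surface, reusing the same surface across frames as long as the size doesn't change.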
I want to convert a XAML canvas to a PNG image using C#. I used RenderTargetBitmap as described in the second post here. It works quite well if the XAML that's meant to be converted is displayed in a window or a page and you can actually see it on screen. But if the window is closed or hidden, or the canvas isn't a child of a window/page/frame, a blank image is generated. Does anyone know why this happens or how to make it work?
I can't be sure, but it may be that WPF saves time by not rendering anything that isn't currently on screen, so when you grab the bitmap from the render target for that object, it hasn't been rendered yet and comes out blank.
I would suggest putting it on screen for the duration of the capture and then removing it. If the object is small, it may appear and disappear in no more than a flicker.
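The blank image is consistent with the detached element never having gone through layout, so a related workaround that often avoids showing it at all is to force a manual layout pass before rendering. A sketch, assuming the canvas has an explicit Width/Height; the method and path handling are illustrative:

static void SaveCanvasAsPng(Canvas canvas, string path)
{
    var size = new Size(canvas.Width, canvas.Height);

    // Without Measure/Arrange a detached element has never been laid out,
    // which is why the rendered bitmap comes out blank.
    canvas.Measure(size);
    canvas.Arrange(new Rect(size));
    canvas.UpdateLayout();

    var rtb = new RenderTargetBitmap((int)size.Width, (int)size.Height,
                                     96, 96, PixelFormats.Pbgra32);
    rtb.Render(canvas);

    // Encode the rendered bitmap as PNG and write it to disk.
    var encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(rtb));
    using (var fs = File.Create(path))
        encoder.Save(fs);
}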