I'm creating a simple graphics editor in C with GLUT, and I'm wondering how I can save my drawings as PNG, BMP, etc. I tried libpng (png.h), but it didn't work for me: I got no errors, but nothing was saved.
Any advice?
You can easily save BMP images from your app using SOIL (libsoil).
http://lonesock.net/soil.html
For example, I used:
int save_result = SOIL_save_screenshot(
    filename,
    SOIL_SAVE_TYPE_BMP,
    0, 0,                 /* lower-left corner of the capture area */
    width, height
);
If you are a Debian user, there is a package named libsoil-dev. It should be available in Ubuntu too.
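If you'd rather avoid the dependency, the uncompressed 24-bit BMP container is simple enough to write by hand from the raw pixel buffer that glReadPixels returns (GL_BGR, bottom row first, which happens to match BMP's layout). Here is a minimal sketch of the file layout, in Python purely for brevity; the struct layout translates directly to C, and the save_bmp name and sample buffer are invented for illustration:

```python
import struct

def save_bmp(filename, pixels, width, height):
    """Write a 24-bit uncompressed BMP file.

    `pixels` is a bytes object of BGR triples with the bottom row
    first, the layout glReadPixels(0, 0, w, h, GL_BGR,
    GL_UNSIGNED_BYTE, buf) produces and BMP expects.
    """
    row_size = (width * 3 + 3) & ~3      # each pixel row is padded to 4 bytes
    image_size = row_size * height
    file_size = 14 + 40 + image_size     # file header + info header + pixels
    with open(filename, "wb") as f:
        # BITMAPFILEHEADER: signature, total size, 2 reserved words, data offset
        f.write(struct.pack("<2sIHHI", b"BM", file_size, 0, 0, 54))
        # BITMAPINFOHEADER: header size, dimensions, 1 plane, 24 bpp,
        # no compression, image size, ~72 dpi resolution, no palette
        f.write(struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24, 0,
                            image_size, 2835, 2835, 0, 0))
        pad = b"\x00" * (row_size - width * 3)
        for y in range(height):
            f.write(pixels[y * width * 3:(y + 1) * width * 3] + pad)

# A 2x2 test image in BGR order: bottom row blue, green; top row red, white.
save_bmp("out.bmp", bytes([255, 0, 0,  0, 255, 0,
                           0, 0, 255,  255, 255, 255]), 2, 2)
```

This only covers the uncompressed 24-bit BMP case; for PNG you still want a library such as libpng or SOIL.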
I need your help very much indeed.
I am new to Codename One and I am trying to implement a basic example: drawing an image with a Graphics object. I have only found incomplete examples and basic ideas about drawing an image, and I cannot do what I want. I need an example from scratch.
When I run it, the image is not drawn. I do not know if I have to do something with the img variable beforehand; if so, would you please write the code?
//this the class i call painel
static Image img;

public void paint(Graphcs g)
{
    try
    {
        img.createImage("fundo.jpg");
        g.drawImage(img, 10, 10);
    }
    cathc(IOException ex)
    {
    }
}
It seems catch is misspelled, so I'm guessing that the code wasn't copied and pasted.
You are opening the image incorrectly (the path is wrong, and ideally you should load it from the resource file), which should throw an exception. Doing it inside the paint loop would also be really slow.
I suggest going over the developer guide on the website and one or more of the tutorials.
Given an image (e.g. a newspaper, a scanned newspaper, or a magazine), how do I detect the regions containing text? I only need to know the regions and remove them; I don't need to do text recognition.
The purpose is to remove these text areas so as to speed up my feature extraction procedure, since the text is meaningless for my application. Does anyone know how to do this?
By the way, it would be great if this could be done in Matlab!
Best!
You can use Stroke Width Transform (SWT) to highlight text regions.
Using my MEX implementation posted here, you can:
img = imread('http://i.stack.imgur.com/Eyepc.jpg');
[swt swtcc] = SWT( img, 0, 10 );
Playing with internal parameters of the edge-map extraction and image filtering in SWT.m can help you tweak the resulting mask to your needs.
To get this result, I used these parameters for the edge map computation in SWT.m:
edgeMap = single( edge( img, 'canny', [0.05 0.25] ) );
Text detection in natural images is an active area of research in the computer vision community; you can refer to the ICDAR papers. But in your case I think it should be simple enough: since your text comes from newspapers or magazines, it should be of a fixed size and horizontally oriented.
So you can apply a scanning window of a fixed size, say 32x32. Train it on the ICDAR 2003 training dataset, using windows that contain text as positives. You can use a small feature set of color and gradients and train an SVM that returns a positive or negative result depending on whether a window contains text.
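The scanning-window loop described above can be sketched as follows (Python for illustration). In the real pipeline each window would be scored by the trained SVM; here a simple horizontal-gradient density score stands in for that classifier, and all the names and parameters (text_windows, the 8x8 window, the thresholds) are invented just to show the structure:

```python
def text_windows(img, win=8, step=4, thresh=0.2):
    """Slide a fixed-size window over a grayscale image (list of lists,
    values 0..255) and flag windows that look text-like.

    The gradient-density score below is only a stand-in for the SVM
    decision function described in the answer.
    """
    h, w = len(img), len(img[0])
    hits = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            # Fraction of strong horizontal gradients inside the window:
            # text produces many sharp dark/light transitions.
            strong = sum(1
                         for r in range(y, y + win)
                         for c in range(x, x + win - 1)
                         if abs(img[r][c + 1] - img[r][c]) > 100)
            if strong / float(win * (win - 1)) > thresh:
                hits.append((x, y))
    return hits
```

On a real page you would replace the gradient score with the SVM's decision function and merge overlapping positive windows into text regions.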
For reference, go to http://crypto.stanford.edu/~dwu4/ICDAR2011.pdf. For code, you can try the authors' homepages.
This example in the Computer Vision System Toolbox in Matlab shows how to detect text using MSER regions.
If your image is well binarized and you know the usual size of the text, you could use the HorizontalRunLengthSmoothing and VerticalRunLengthSmoothing algorithms. They are implemented in the open-source library AForge.NET, but it should be easy to reimplement them in Matlab.
The intersection of the result images from these two algorithms gives a good indication that a region contains text; it is not perfect, but it is fast.
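The core of run-length smoothing is short enough to sketch directly: fill every background run shorter than a threshold that lies between two ink pixels, repeat on the columns, and intersect the two results. A rough illustration in Python (1 = ink; the function names here are mine, not AForge.NET's):

```python
def smooth_row(row, c):
    """Horizontal run-length smoothing of one binary row (1 = ink):
    any run of background (0) shorter than `c` that sits between two
    ink pixels is filled with ink."""
    out = row[:]
    i, n = 0, len(row)
    while i < n:
        if row[i] == 0:
            j = i
            while j < n and row[j] == 0:
                j += 1
            # fill only interior gaps shorter than the threshold
            if 0 < i and j < n and (j - i) < c:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out

def rlsa(img, c):
    """Horizontal RLSA, vertical RLSA (via transpose), then AND the
    two results; the intersection marks likely text blocks."""
    horiz = [smooth_row(row, c) for row in img]
    cols = [smooth_row(list(col), c) for col in zip(*img)]
    vert = [list(row) for row in zip(*cols)]
    return [[h & v for h, v in zip(hr, vr)] for hr, vr in zip(horiz, vert)]
```

Applied to a binarized page, connected regions of 1s in the output are candidate text blocks; the threshold c should be on the order of the expected inter-character spacing.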
The aim is, for a given GIF file that contains several images, to extract those images' pixels, edit (change) them, and put them back into the GIF file.
I am trying to do this using giflib. The language used is C.
I have successfully read the GIF file and have access to the pixels of an image using the following code:
GifFileType *gifFile = DGifOpenFileName(filename);
DGifSlurp(gifFile);
But as the documentation says about the DGifSlurp function:
When you have modified the image to taste, write it out with
EGifSpew().
However using that function results in:
GIF-LIB error: Given file was not opened for write.
In the following code:
GifFileType *gifFile = DGifOpenFileName(filename);
DGifSlurp(gifFile);
EGifSpew(gifFile);
Do you know how to save the edited GIF image?
Your doc is outdated. You should take a look here: http://giflib.sourceforge.net/gif_lib.html#idp26995312
You can write to a GIF file through a function hook. Initialize with
GifFileType *EGifOpen(void *userPtr, OutputFunc writeFunc, int *ErrorCode)
and see the library header file for the type of OutputFunc.
The key point is that EGifSpew() must be given a GifFileType that was opened for writing (via EGifOpen(), EGifOpenFileName(), or EGifOpenFileHandle(), which takes a file descriptor), not the handle that DGifOpenFileName() opened for reading; hence the "Given file was not opened for write" error.
Hope that helps.
I would like to know how I can cut a JPG file using coordinates I retrieve with ARToolKit and OpenCV; see:
Blob Detection
I want to retrieve the coordinates of the white sheet and then use those coordinates to cut a JPG file I took before.
I found this, but how can it help?
How to slice/cut an image into pieces
If you already have the coordinates, you might want to deskew the image first:
http://nuigroup.com/?ACT=28&fid=27&aid=1892_H6eNAaign4Mrnn30Au8d
This post uses cv::warpPerspective() to achieve that effect.
The references above use the C++ interface of OpenCV, but I'm sure you are capable of converting between the two.
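For intuition, what cv::warpPerspective does under the hood is inverse mapping: for each destination pixel, project back through the inverse homography and sample the source image. A nearest-neighbour sketch (pure Python, illustrative only; the real function also interpolates):

```python
def warp_perspective(src, h_inv, out_w, out_h):
    """Nearest-neighbour perspective warp: map every destination pixel
    back through the inverse homography `h_inv` (3x3, row-major list
    of lists) and sample the source image (list of lists)."""
    dst = [[0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # homogeneous back-projection: [sx, sy, sw] = h_inv . [x, y, 1]
            sx = h_inv[0][0] * x + h_inv[0][1] * y + h_inv[0][2]
            sy = h_inv[1][0] * x + h_inv[1][1] * y + h_inv[1][2]
            sw = h_inv[2][0] * x + h_inv[2][1] * y + h_inv[2][2]
            if sw == 0:
                continue
            u, v = int(round(sx / sw)), int(round(sy / sw))
            if 0 <= v < len(src) and 0 <= u < len(src[0]):
                dst[y][x] = src[v][u]
    return dst
```

OpenCV computes the inverse homography for you (or accepts the WARP_INVERSE_MAP flag); this sketch takes the inverse directly to keep the code short.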
Second, cutting a particular area out of an image is known as extracting a Region Of Interest (ROI). The general procedure is: create a CvRect to define your ROI, then call cvSetImageROI() followed by cvSaveImage() to save it to disk.
This post shares C code to achieve this task.
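Conceptually, the ROI step is nothing more than a rectangular crop; a tiny sketch (Python, list-of-lists image, crop_roi is a made-up name):

```python
def crop_roi(img, x, y, w, h):
    """Extract a rectangular Region Of Interest: the same operation
    cvSetImageROI + cvSaveImage performs, expressed as plain slicing
    on a list-of-lists image."""
    return [row[x:x + w] for row in img[y:y + h]]

# 4x5 demo image whose pixel value encodes its position (row*10 + col)
img = [[r * 10 + c for c in range(5)] for r in range(4)]
roi = crop_roi(img, 1, 2, 3, 2)   # x=1, y=2, width=3, height=2
```

With OpenCV this whole sketch collapses to cvSetImageROI(img, cvRect(x, y, w, h)) followed by cvSaveImage(path, img).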
I have an IP camera that serves images. These images are processed via EmguCV, and then I want to display the processed images.
To show the images, I use this code:
Window1()
{
    ...
    this.Dispatcher.Hooks.DispatcherInactive
        += new EventHandler(Hooks_DispatcherInactive);
}

Hooks_DispatcherInactive(...)
{
    Next();
}
Next() then calls the image-processing methods and (should) display the image:
MatchResult? result = survey.Step();
if (result.HasValue)
{
    Bitmap bit = result.Value.image.Bitmap;
    ImageSource src = ConvertBitmap(bit);
    show.Source = src;
    ...
}
This works fine when I hook up a normal 30 fps webcam. But the IP cam's images take over a second to arrive (also when I access the camera from a browser), and in the meantime WPF shows nothing, not even the previously processed image.
How can I get WPF to at least keep showing the previous image?
You can copy the image's buffer into a new BitmapSource of the same format (PixelFormat, height, width, stride) using Create (from an Array) or Create (from an IntPtr) and display that BitmapSource in WPF's Image control.
Alternatively, you can use DirectX to do this faster, although for 30 fps (let alone 1 fps) the BitmapSource approach should do.
Also, consider NOT wiring up events in the view; use bindings and commands instead.