Is it possible to directly read/write to a WriteableBitmap's pixel data? I'm currently using WriteableBitmapEx's SetPixel() but it's slow and I want to access the pixels directly without any overhead.
I haven't used HTML5's canvas in a while, but if I recall correctly, you could get its image data as a single array of numbers, and that's kind of what I'm looking for.
Thanks in advance
To answer your question, you can more directly access a writable bitmap's data by using the Lock, write, Unlock pattern, as demonstrated below, but it is typically not necessary unless you are basing your drawing upon the contents of the image. More typically, you can just create a new buffer and make it a bitmap, rather than the other way around.
That being said, there are many extensibility points in WPF for doing innovative drawing without resorting to pixel manipulation. For most controls, the existing WPF primitives (Border, Line, Rectangle, Image, etc.) are more than sufficient - don't be concerned about using many of them, they are rather cheap. For complex controls, you can use a DrawingContext to issue lower-level drawing commands (WPF renders them with Direct3D under the hood). For image effects, you can implement GPU-assisted shaders using the Effect class or use the built-in effects (BlurEffect and DropShadowEffect).
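For instance, a rough sketch of staying within that model (the element and values here are arbitrary, not anything from your code):
// scale and rotate an element with a RenderTransform and blur it with an Effect,
// instead of repainting its pixels by hand
var rect = new System.Windows.Shapes.Rectangle { Width = 100, Height = 60, Fill = Brushes.SteelBlue };
var tg = new TransformGroup();
tg.Children.Add(new ScaleTransform(2.0, 2.0));
tg.Children.Add(new RotateTransform(30));
rect.RenderTransform = tg;
rect.Effect = new System.Windows.Media.Effects.BlurEffect { Radius = 4 };
WPF applies the transform and effect at render time, so your code never has to touch a pixel buffer.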
But, if your situation requires direct pixel access, pick a pixel format and start writing. I suggest BGRA32 because it is easy to understand and is probably the most common one to be discussed.
BGRA32 means the pixel data is stored in memory as 4 bytes representing the blue, green, red, and alpha channels of an image, in that order. It is convenient because each pixel falls on a 4-byte boundary, lending itself to storage in a 32-bit integer. When dealing with a 32-bit integer, keep in mind the byte order will be reversed on most platforms (check BitConverter.IsLittleEndian to determine the proper byte order at runtime if you need to support multiple platforms; x86 and x86_64 are both little-endian).
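As a quick sketch (the values are arbitrary), packing one such pixel by hand on a little-endian platform looks like this:
byte blue = 0x10, green = 0x20, red = 0x30, alpha = 0xff;
// the four bytes as they sit in the bitmap's buffer
byte[] pixelBytes = { blue, green, red, alpha };
// the same pixel packed into a 32-bit value (written in source as 0xAARRGGBB)
uint packed = (uint)blue | ((uint)green << 8) | ((uint)red << 16) | ((uint)alpha << 24);
// on a little-endian platform (x86/x64) the two representations match byte for byte
bool same = BitConverter.IsLittleEndian && BitConverter.ToUInt32(pixelBytes, 0) == packed;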
The image data is stored in rows; each row is one stride wide and holds a single row of the image's pixels. The stride is always greater than or equal to the pixel width of the image multiplied by the number of bytes per pixel in the chosen format. Certain situations, specific to certain architectures, can cause the stride to be longer than width * bytesPerPixel, so you must use the stride to calculate the start of a row rather than multiplying by the width. Since we are using a 4-byte-wide pixel format, our stride does happen to be width * 4, but you should not rely on that.
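In other words, the byte offset of a pixel is always computed from the stride, not from the width; a minimal helper (the name is mine, not part of any API) would be:
// byte offset of pixel (x, y) in a buffer laid out with the given stride
int PixelOffset(int x, int y, int stride, int bytesPerPixel)
{
    return y * stride + x * bytesPerPixel;
}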
As mentioned, the only case in which I would suggest using a WriteableBitmap is when you are modifying an existing image, so that is what the example below does:
Before / After:
// must be compiled with /UNSAFE
// get an image to draw on and convert it to our chosen format
BitmapSource srcImage = JpegBitmapDecoder.Create(File.Open("img13.jpg", FileMode.Open),
    BitmapCreateOptions.None, BitmapCacheOption.OnLoad).Frames[0];
if (srcImage.Format != PixelFormats.Bgra32)
    srcImage = new FormatConvertedBitmap(srcImage, PixelFormats.Bgra32, null, 0);
// get a writable bitmap of that image
var wbitmap = new WriteableBitmap(srcImage);
int width = wbitmap.PixelWidth;
int height = wbitmap.PixelHeight;
int stride = wbitmap.BackBufferStride;
int bytesPerPixel = (wbitmap.Format.BitsPerPixel + 7) / 8;
wbitmap.Lock();
byte* pImgData = (byte*)wbitmap.BackBuffer;
// set alpha to 50% for any pixel with red < 0x44 and invert the others
int cRowStart = 0;
int cColStart = 0;
for (int row = 0; row < height; row++)
{
    cColStart = cRowStart;
    for (int col = 0; col < width; col++)
    {
        byte* bPixel = pImgData + cColStart;
        UInt32* iPixel = (UInt32*)bPixel;
        if (bPixel[2 /* bgRa */] < 0x44)
        {
            // set to 50% transparent
            bPixel[3 /* bgrA */] = 0x7f;
        }
        else
        {
            // invert but maintain alpha
            *iPixel = *iPixel ^ 0x00ffffff;
        }
        cColStart += bytesPerPixel;
    }
    cRowStart += stride;
}
// tell WPF which region changed so it gets re-rendered
wbitmap.AddDirtyRect(new Int32Rect(0, 0, width, height));
wbitmap.Unlock();
// if you are going across threads, you will need to additionally freeze the source
wbitmap.Freeze();
However, it really isn't necessary if you are not modifying an existing image. For example, you can draw a checkerboard pattern using all safe code:
Output:
// draw rectangles
int width = 640, height = 480, bytesperpixel = 4;
int stride = width * bytesperpixel;
byte[] imgdata = new byte[width * height * bytesperpixel];
int rectDim = 40;
UInt32 darkcolorPixel = 0xffaaaaaa;
UInt32 lightColorPixel = 0xffeeeeee;
UInt32[] intPixelData = new UInt32[width * height];
for (int row = 0; row < height; row++)
{
    for (int col = 0; col < width; col++)
    {
        intPixelData[row * width + col] = ((col / rectDim) % 2) != ((row / rectDim) % 2) ?
            lightColorPixel : darkcolorPixel;
    }
}
Buffer.BlockCopy(intPixelData, 0, imgdata, 0, imgdata.Length);
// compose the BitmapSource
var bsCheckerboard = BitmapSource.Create(width, height, 96, 96, PixelFormats.Bgra32, null, imgdata, stride);
And you don't really even need an Int32 intermediate, if you write to the byte array directly.
Output:
// draw using byte array
int width = 640, height = 480, bytesperpixel = 4;
int stride = width * bytesperpixel;
byte[] imgdata = new byte[width * height * bytesperpixel];
// draw a gradient from green to red from left to right (R 00 -> ff; G ff -> 00)
// draw a gradient of alpha from top to bottom (A 00 -> ff)
// blue constant at 00
for (int row = 0; row < height; row++)
{
    for (int col = 0; col < width; col++)
    {
        // BGRA
        imgdata[row * stride + col * 4 + 0] = 0;
        imgdata[row * stride + col * 4 + 1] = Convert.ToByte((1 - (col / (float)width)) * 0xff);
        imgdata[row * stride + col * 4 + 2] = Convert.ToByte((col / (float)width) * 0xff);
        imgdata[row * stride + col * 4 + 3] = Convert.ToByte((row / (float)height) * 0xff);
    }
}
var gradient = BitmapSource.Create(width, height, 96, 96, PixelFormats.Bgra32, null, imgdata, stride);
Edit: apparently, you are trying to use WPF to make some sort of image editor. I would still use WPF primitives for shapes and source bitmaps, implement translation, scaling, and rotation as RenderTransforms and bitmap effects as Effects, and keep everything within the WPF model. But if that does not work for you, there are many other options.
You could use WPF primitives to render into a RenderTargetBitmap with a chosen PixelFormat and then wrap it in a WriteableBitmap, as below:
Canvas cvRoot = new Canvas();
// position primitives on the canvas, then render it into the bitmap
var rtb = new RenderTargetBitmap(width, height, dpix, dpiy, PixelFormats.Bgra32);
rtb.Render(cvRoot);
var wb = new WriteableBitmap(rtb);
You could use a WPF DrawingVisual to issue GDI-style (immediate mode) drawing commands and then render to a bitmap, as demonstrated in the sample on the RenderTargetBitmap page.
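A minimal sketch of that approach (sizes and brushes are arbitrary) might look like:
var visual = new DrawingVisual();
using (DrawingContext dc = visual.RenderOpen())
{
    dc.DrawRectangle(Brushes.White, null, new Rect(0, 0, 200, 100));
    dc.DrawLine(new Pen(Brushes.Red, 2), new Point(0, 0), new Point(200, 100));
}
var rtb = new RenderTargetBitmap(200, 100, 96, 96, PixelFormats.Pbgra32);
rtb.Render(visual);
// rtb is now a BitmapSource you can display or wrap in a WriteableBitmap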
You could use GDI via an InteropBitmap created with System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap from an HBITMAP retrieved from System.Drawing.Bitmap.GetHbitmap(). Make sure you don't leak the HBITMAP, though.
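A rough sketch of that route, assuming the source is a System.Drawing.Bitmap (the helper name is mine, not part of any library):
[System.Runtime.InteropServices.DllImport("gdi32.dll")]
static extern bool DeleteObject(IntPtr hObject);

static BitmapSource FromGdiBitmap(System.Drawing.Bitmap bmp)
{
    IntPtr hBitmap = bmp.GetHbitmap();
    try
    {
        return System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
            hBitmap, IntPtr.Zero, Int32Rect.Empty,
            BitmapSizeOptions.FromEmptyOptions());
    }
    finally
    {
        DeleteObject(hBitmap);   // avoid leaking the HBITMAP; the BitmapSource keeps its own copy
    }
}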
After a nice long headache, I found this article that explains a way to do it without bit arithmetic, and that lets me treat the back buffer as an array instead:
unsafe
{
    // assumes a Bgra32 WriteableBitmap that has already been Lock()ed
    IntPtr pBackBuffer = bitmap.BackBuffer;
    byte* pBuff = (byte*)pBackBuffer.ToPointer();
    // write one pixel at (x, y): B, G, R, A
    pBuff[4 * x + (y * bitmap.BackBufferStride)] = 255;
    pBuff[4 * x + (y * bitmap.BackBufferStride) + 1] = 255;
    pBuff[4 * x + (y * bitmap.BackBufferStride) + 2] = 255;
    pBuff[4 * x + (y * bitmap.BackBufferStride) + 3] = 255;
}
You can access the raw pixel data by calling the Lock() method and using the BackBuffer property afterwards. When you're finished, don't forget to call AddDirtyRect and Unlock.
For a simple example, you can take a look at this: http://cscore.codeplex.com/SourceControl/latest#CSCore.Visualization/WPF/Utils/PixelManipulationBitmap.cs
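In outline, the whole pattern looks like this (a sketch assuming a Bgra32 WriteableBitmap named bitmap and a pixel position x, y):
bitmap.Lock();
try
{
    unsafe
    {
        byte* pBuff = (byte*)bitmap.BackBuffer.ToPointer();
        byte* pPixel = pBuff + y * bitmap.BackBufferStride + x * 4;
        pPixel[0] = 0xff;   // B
        pPixel[1] = 0xff;   // G
        pPixel[2] = 0xff;   // R
        pPixel[3] = 0xff;   // A
    }
    bitmap.AddDirtyRect(new Int32Rect(x, y, 1, 1));
}
finally
{
    bitmap.Unlock();
}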
Related
I am currently developing a tile-based game in C and I'm trying to implement zooming using the nearest neighbor algorithm.
This is what the algorithm looks like right now:
u32 *DestPixel = (u32 *)DestBitmap.Pixels;
u32 *SourcePixel = (u32 *)SourceBitmap->Pixels;
f64 RatioX = (f64)SourceBitmap->Width / (f64)DestBitmap.Width;
f64 RatioY = (f64)SourceBitmap->Height / (f64)DestBitmap.Height;
for(s32 Y = 0; Y < DestBitmap.Height; ++Y)
{
    for(s32 X = 0; X < DestBitmap.Width; ++X)
    {
        s32 ScaledX = (s32)(X * RatioX);
        s32 ScaledY = (s32)(Y * RatioY);
        s32 DestOffset = Y*DestBitmap.Width + X;
        s32 SourceOffset = ScaledY*SourceBitmap->Width + ScaledX;
        *(DestPixel + DestOffset) = *(SourcePixel + SourceOffset);
    }
}
However, it is not producing the results I want. When trying to convert the source bitmap (64x64) to a bitmap whose size is not a power of 2 (in this case 30x30), the scaled bitmap looks weird on the right side. Left side is the original bitmap while the right side is the scaled bitmap:
What I've tried:
Rounding ScaledX and ScaledY (instead of truncating)
Flooring ScaledX and ScaledY (instead of truncating)
I actually solved it. The problem was in the rendering of the bitmaps, not the algorithm. I was drawing 4 pixels at a time in my rendering code, which doesn't work with 30x30, only with power-of-two sizes.
I'm loading a .png file using OpenCV and I want to extract its blue intensity values using thrust library.
My code goes like this:
Loading an image using OpenCV IplImage pointer
Copying the image data into thrust::device_vector
Extracting the blue intensity values from the device vector inside a structure using thrust library.
Now I have a problem in extracting Blue Intensity values from the device vector.
I already wrote this code in plain CUDA, and I'm now converting it to use the thrust library.
The blue intensity values are fetched inside the functor below.
I want to know how to call this struct FetchBlueValues from the main function.
Code:
#define ImageWidth 14
#define ImageHeight 10
thrust::device_vector<int> BinaryImage(ImageWidth*ImageHeight);
thrust::device_vector<int> ImageVector(ImageWidth*ImageHeight*3);
struct FetchBlueValues
{
    __host__ __device__ void operator() ()
    {
        int index = 0;
        for(int i=0; i<= ImageHeight*ImageWidth*3 ; i = i+3)
        {
            BinaryImage[index] = ImageVector[i];
            index++;
        }
    }
};

void main()
{
    src = cvLoadImage("../Input/test.png", CV_LOAD_IMAGE_COLOR);
    unsigned char *raw_ptr,*out_ptr;
    raw_ptr = (unsigned char*) src->imageData;
    thrust::device_ptr<unsigned char> dev_ptr = thrust::device_malloc<unsigned char>(ImageHeight*src->widthStep);
    thrust::copy(raw_ptr,raw_ptr+(src->widthStep*ImageHeight),dev_ptr);
    int index=0;
    for(int j=0;j<ImageHeight;j++)
    {
        for(int i=0;i<ImageWidth;i++)
        {
            ImageVector[index] = (int) dev_ptr[ (j*src->widthStep) + (i*src->nChannels) + 0 ];
            ImageVector[index+1] = (int) dev_ptr[ (j*src->widthStep) + (i*src->nChannels) + 1 ];
            ImageVector[index+2] = (int) dev_ptr[ (j*src->widthStep) + (i*src->nChannels) + 2 ];
            index += 3;
        }
    }
}
Since the image is stored in pixel format, and each pixel includes distinct colors, there is a natural "stride" in accessing the individual color components of each pixel. In this case, it appears that the color components of a pixel are stored in three successive int quantities per pixel, so the access stride for a given color component would be three.
An example strided range access iterator methodology is covered here.
I want to draw a grid as in the picture below.
I know a trick: draw 6 vertical and 6 horizontal lines instead of 6 x 6 small rectangles.
But at smaller zoom levels (zoom for viewing the picture), there are many lines. For example, say my view window is 800 x 600 and it is viewing a picture of size 400 x 300 (so the zoom factor is 2). There will be 400 x 300 rectangles of size 2 x 2 (each rectangle represents a pixel).
If I draw each cell (in a loop, say 400 x 300 times), it is very slow (when I move the window...).
Using the trick solves the problem.
But I am still curious whether there is a better way to do this task in winapi / GDI(+). For example, is there a function like DrawGrid(HDC hdc, int x, int y, int numOfCellsH, int numOfCellsV)?
A further question: if I don't resize or move the window and don't change the zoom, the grid doesn't change. So even if I update the picture continuously (capturing the screen), it is unnecessary to redraw the grid. But I use StretchBlt and BitBlt to capture the screen (to a memory DC, then to the window's HDC); if I don't redraw the grid into the memory DC, the grid disappears. Is there a way to make the grid stay put and only update the bitmap of the screen capture?
ps: This is not a real issue. Since I only draw the grid when the zoom is at least 10 (so each cell is 10 x 10 or larger), there are at most 100 + 100 = 200 lines to draw and it is fast. I am just curious if there is a faster way.
Have you considered using CreateDIBSection? It gives you a pointer to the bits, so you can manipulate the R, G, B values rapidly. For example, the following creates a 256x256x24 bitmap and paints green grid lines at 64-pixel intervals:
BITMAPINFO BI = {0};
BITMAPINFOHEADER &BIH = BI.bmiHeader;
BIH.biSize = sizeof(BITMAPINFOHEADER);
BIH.biBitCount = 24;
BIH.biWidth = 256;
BIH.biHeight = 256;
BIH.biPlanes = 1;
LPBYTE pBits = NULL;
HBITMAP hBitmap = CreateDIBSection(NULL, &BI, DIB_RGB_COLORS, (void**) &pBits, NULL, 0);
LPBYTE pDst = pBits;
for (int y = 0; y < 256; y++)
{
    for (int x = 0; x < 256; x++)
    {
        BYTE R = 0;
        BYTE G = 0;
        BYTE B = 0;
        if (x % 64 == 0) G = 255;
        if (y % 64 == 0) G = 255;
        *pDst++ = B;
        *pDst++ = G;
        *pDst++ = R;
    }
}
HDC hMemDC = CreateCompatibleDC(NULL);
HGDIOBJ hOld = SelectObject(hMemDC, hBitmap);
BitBlt(hdc, 0, 0, 256, 256, hMemDC, 0, 0, SRCCOPY);
SelectObject(hMemDC, hOld);
DeleteDC(hMemDC);
DeleteObject(hBitmap);
Generally speaking, the major limiting factors for these kinds of graphics operations are the fill rate and the number of function calls.
The fill rate is how fast the machine can change the pixel values. In general, blits (copying rectangular areas) are very fast because they're highly optimized and designed to touch memory in a cache friendly order. But a blit touches all the pixels in that region. If you're going to overdraw or if most of those pixels don't really need to change, then it's likely more efficient to draw just the pixels you need, even if that's not quite as cache-friendly.
If you're drawing n primitives by making n function calls, then the per-call overhead might become a limiting factor as n gets large, and it could make sense to look for an API call that lets you draw several (or all) of the lines at once.
Your "trick" demonstrates both of these optimizations. Drawing 20 lines is fewer calls than 100 rectangles, and it touches far fewer pixels. And as the window grows or your grid size decreases, the lines approach will increase linearly both in number of calls and in pixels touched while the rectangle method will grow as n^2.
I don't think you can do any better when it comes to touching the minimum number of pixels. But I suppose the number of function calls might become a factor if you're drawing very many lines. I don't know GDI+, but in plain GDI, there are functions like Polyline and PolyPolyline which will let you draw several lines in one call.
I would like to know how the vertices of glVertex2f(x, y) map to actual screen integer co-ordinates.
I intend to use a complex plane with minR, minI and maxR, maxI (I and R being the imaginary and real parts), such that the plane gets mapped to 512 x 512 pixels on the screen. I sample 512 points between the min and max values.
The mapping of the vertices is unclear to me, since I had to scale my planar image using glScalef(100, 100, 0) to get it to roughly fit the screen. But still, a large portion of it is left blank.
Please note that I am using the glBegin(GL_POINTS) routine to map the points in the plane to the screen.
The code looks thus,
for (X = 0; X < 512; X++)
    for (Y = 0; Y < 512; Y++)
        glVertex2f (Complexplane[X][Y].real, Complexplane[X][Y].imag);
P.S.:
Complexplane[0][0].real = -2, Complexplane[0][0].imag = -1.2
Complexplane[511][511].real = 1.0, Complexplane[511][511].imag = 1.8
I'm assuming you haven't set the projection or modelview matrices - they will be set to the identity matrix by default BTW...
For X,Y coordinates, a point will be visible if: -1 <= X <= 1, -1 <= Y <= 1
The glViewport function describes how this range is mapped to the window. It is initially set to (0, 0, window_width, window_height) when the GL context is created. The fact that your image only fills a portion of the window even with glScalef(100, 100, 0) suggests you are applying another transform elsewhere. A cleaner approach than scaling, for example, would be to set an orthographic projection matching your complex-plane bounds, e.g. gluOrtho2D(minR, maxR, minI, maxI), so the plane maps onto the whole viewport without any manual scaling.
The mapping depends on the transformation matrices that are set. Up to OpenGL-2, the fixed-function vertex pipeline is:
v_eye = ModelviewMatrix * v
v_projected = ProjectionMatrix * v_eye
v_clipped = clip(v_projected)
v_NDC.xyzw = v_clipped.xyzw / v_clipped.w
The default matrices are identity, so the only operation applied in the default state is the clipping. v_NDC then undergoes the viewport transform:
p.xyz = (v_NDC.xyz + 1) * viewport.wh / 2 + viewport.xy
int main()
{
    image_double image;
    ntuple_list out;
    unsigned int xsize,ysize,depth;
    int x,y,i,j,width,height,step;
    uchar *p;
    IplImage* img = 0;
    IplImage* dst = 0;
    img = cvLoadImage("D:\\Ahram.jpg",CV_LOAD_IMAGE_COLOR);
    width = img->width;
    height = img->height;
    dst=cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,1);
    cvCvtColor(img,dst,CV_RGB2GRAY);
    width=dst->width;
    height=dst->height;
    step=dst->widthStep;
    p=(uchar*)dst->imageData;
    image=new_image_double(dst->width,dst->height);
    xsize=dst->width;
    for(i=0;i<height;i++)
    {
        for(j=0;j<width;j++)
        {
            image->data[i+j*xsize]=p[i*step+j];
        }
    }
    /* call LSD */
    out = lsd(dst);
    /* print output */
    printf("%u line segments found:\n",out->size);
    for(i=0;i<out->size;i++)
    {
        for(j=0;j<out->dim;j++)
            printf("%f ",out->values[ i * out->dim + j ]);
        printf("\n");
    }
    /* free memory */
    free_image_double(image);
    free_ntuple_list(out);
    return 0;
}
N.B.: it compiles with no errors, but when I run it, it gives an LSD internal error: invalid image input.
Start by researching how PGM is structured:
Each PGM image consists of the following:
1. A "magic number" for identifying the file type.
A pgm image's magic number is the two characters "P5".
2. Whitespace (blanks, TABs, CRs, LFs).
3. A width, formatted as ASCII characters in decimal.
4. Whitespace.
5. A height, again in ASCII decimal.
6. Whitespace.
7. The maximum gray value (Maxval), again in ASCII decimal.
Must be less than 65536, and more than zero.
8. A single whitespace character (usually a newline).
9. A raster of Height rows, in order from top to bottom.
Each row consists of Width gray values, in order from left to right.
Each gray value is a number from 0 through Maxval, with 0 being black
and Maxval being white. Each gray value is represented in pure binary
by either 1 or 2 bytes. If the Maxval is less than 256, it is 1 byte.
Otherwise, it is 2 bytes. The most significant byte is first.
For PGM type P2, the pixels are readable (ASCII) in the file, but for P5 they are not, because they are stored in binary format.
One important thing you should know is that this format takes only 1 channel per pixel. This means PGM can only store grayscale images. Remember this!
Now, if you're using OpenCV to load images from a file, you should load them using CV_LOAD_IMAGE_GRAYSCALE:
IplImage* cv_img = cvLoadImage("chairs.png", CV_LOAD_IMAGE_GRAYSCALE);
if(!cv_img)
{
    std::cout << "ERROR: cvLoadImage failed" << std::endl;
    return -1;
}
But if you use any other flag on this function or if you create an image with cvCreateImage(), or if you're capturing frames from a camera or something like that, you'll need to convert each frame to its grayscale representation using cvCvtColor().
I downloaded lsd-1.5 and noticed that there is an example there that shows how to use the library. One of the source code files, named lsd_cmd.c, manually reads a PGM file and assembles an image_double with it. The function that does this trick is read_pgm_image_double(), and it reads the pixels from a PGM file and stores them inside image->data. This is important because if the following does not work, you'll have to iterate on the pixels of IplImage and do this yourself.
After successfully loading a gray scaled image into IplImage* cv_img, you can try to create the structure you need with:
image_double image = new_image_double(cv_img->width, cv_img->height);
image->data = (double *) cv_img->imageData;
In case this doesn't work, you'll need to check the file I suggested above and iterate through the pixels of cv_img->imageData and copy them one by one (doing the proper type conversion) to image->data.
At the end, don't forget to free this resource when you're done using it:
free_image_double(image);
This question helped me some time ago. You've probably solved it already, so sorry for the delay, but I'm sharing the answer now.
I'm using lsd 1.6 and the lsd interface is a little different from the one you are using (they changed the lsd function interface from 1.5 to 1.6).
CvCapture* capture;
capture = cvCreateCameraCapture (0);
assert( capture != NULL );

//get capture properties
int width = cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH);
int height = cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT);

//create OpenCV image structs
IplImage *frame;
IplImage *frameBW = cvCreateImage( cvSize( width, height ), IPL_DEPTH_8U, 1 );

//create LSD image type
double *image;
image = (double *) malloc( width * height * sizeof(double) );

while (1) {
    frame = cvQueryFrame( capture );
    if( !frame ) break;

    //convert to grayscale
    cvCvtColor( frame, frameBW, CV_RGB2GRAY);

    //cast into LSD image type
    uchar *data = (uchar *)frameBW->imageData;
    for (int i=0;i<width;i++){
        for(int j=0;j<height;j++){
            image[ i + j * width ] = data[ i + j * width];
        }
    }

    //run LSD
    double *list;
    int n;
    list = lsd( &n, image, width, height );

    //DO PROCESSING DRAWING ETC
    //draw segments on frame
    for (int j=0; j<n ; j++){
        //define segment end-points
        CvPoint pt1 = cvPoint(list[ 0 + j * 7 ],list[ 1 + j * 7 ]);
        CvPoint pt2 = cvPoint(list[ 2 + j * 7 ],list[ 3 + j * 7 ]);

        // draw line segment on frame
        cvLine(frame,pt1,pt2,CV_RGB(255,0,0),1.5,8,0);
    }

    cvShowImage("FRAME WITH LSD",frame);

    //free memory
    free( (void *) list );

    char c = cvWaitKey(1);
    if( c == 27 ) break; // ESC QUITS
}

//free memory
free( (void *) image );
cvReleaseImage( &frame );
cvReleaseImage( &frameBW );
cvDestroyWindow( "FRAME WITH LSD");
Hope this helps you or someone in the future! LSD works really great.