I am filling an AVFrame with a BGR image for encoding, and I am getting a memory leak. I think I have tracked down the source of the problem, but it appears to be a library issue. Since FFmpeg is such a mature library, I assume I am misusing it, and I would like to be shown how to do it correctly.
I am allocating a Frame using:
AVFrame *bgrFrame = av_frame_alloc();
And later I allocate the image in the Frame using:
av_image_alloc(bgrFrame->data, bgrFrame->linesize, bgrFrame->width, bgrFrame->height, AV_PIX_FMT_BGR24, 32);
Then I fill the image allocated using:
av_image_fill_pointers(bgrFrame->data, AV_PIX_FMT_BGR24, bgrFrame->height, originalBGRImage.data, bgrFrame->linesize);
Where originalBGRImage is an OpenCV Mat.
And this leaks memory: apparently av_image_alloc() allocates memory, and av_image_fill_pointers() then reassigns the same pointers (I can see bgrFrame->data[0] changing between the calls).
If I call
av_freep(&bgrFrame->data[0]);
after av_image_alloc(), it's fine; but if I call it after av_image_fill_pointers(), the program crashes, even though bgrFrame->data[0] is not NULL, which I find very curious.
Looking at FFmpeg's av_image_alloc() source code, I see it calls av_image_fill_pointers() twice internally, the second time after allocating a buffer buf. And in av_image_fill_pointers()'s own source code, data[0] is replaced with the image pointer, which is (I think) the source of the memory leak, since data[0] was still holding the buf from the earlier av_image_alloc() call.
So this brings the final question: what's the correct way of filling a frame with an image?
You should allocate your frame once.
AVFrame* alloc_picture(enum PixelFormat pix_fmt, int width, int height)
{
    AVFrame *f = avcodec_alloc_frame();
    if (!f)
        return NULL;

    int size = avpicture_get_size(pix_fmt, width, height);
    uint8_t *buffer = (uint8_t *) av_malloc(size);
    if (!buffer) {
        av_free(f);
        return NULL;
    }

    avpicture_fill((AVPicture *)f, buffer, pix_fmt, width, height);
    return f;
}
Yes, the cast to (AVPicture*) is allowed: https://stackoverflow.com/a/20498359/2079934 .
In subsequent frames, you can write into this same frame. Since your OpenCV raw data is BGR and you need RGB or YUV, you can use sws_scale. In my application, I mirror vertically:
struct SwsContext* convertCtx = sws_getContext(width, height, PIX_FMT_RGB24, c->width, c->height, c->pix_fmt, SWS_FAST_BILINEAR, NULL, NULL, NULL);
avpicture_fill(&pic_raw, (uint8_t*)pixelBuffer, PIX_FMT_RGB24, width, height);
// flip
pic_raw.data[0] += (height - 1) * pic_raw.linesize[0];
pic_raw.linesize[0] *= -1;
sws_scale(convertCtx, pic_raw.data, pic_raw.linesize, 0, height, f->data, f->linesize);
out_size = avcodec_encode_video(c, outputBuffer, outputBufferSize, f);
(You can adapt PIX_FMT_RGB24 to your needs and read from cv::Mat without copying data.)
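For the BGR data in the question, that could look roughly like this. This is a sketch using the same (now deprecated) avpicture API as above; it assumes the cv::Mat is continuous 8-bit BGR, and f and c are the frame and codec context from the snippet above:
AVPicture pic_bgr;
avpicture_fill(&pic_bgr, (uint8_t*)originalBGRImage.data, PIX_FMT_BGR24, width, height);   // wraps the Mat data, no copy
struct SwsContext* convertCtx = sws_getContext(width, height, PIX_FMT_BGR24,
                                               c->width, c->height, c->pix_fmt,
                                               SWS_FAST_BILINEAR, NULL, NULL, NULL);
sws_scale(convertCtx, pic_bgr.data, pic_bgr.linesize, 0, height, f->data, f->linesize);    // convert into the reusable frame f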
av_image_fill_arrays() does the job. It fills the frame's data[] and linesize[] without allocating any new memory.
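A minimal sketch of wrapping the OpenCV buffer with it (my example; it assumes a continuous BGR24 cv::Mat and uses alignment 1 because its rows carry no padding):
AVFrame *frame = av_frame_alloc();
frame->format = AV_PIX_FMT_BGR24;
frame->width  = width;
frame->height = height;
// Point frame->data/linesize at the existing pixels; nothing is allocated or copied.
av_image_fill_arrays(frame->data, frame->linesize, originalBGRImage.data,
                     AV_PIX_FMT_BGR24, width, height, 1);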
A late answer, but after spending many hours on this, I want to share what I found.
From the documentation of AVFrame::buf:
/**
* AVBuffer references backing the data for this frame. All the pointers in
* data and extended_data must point inside one of the buffers in buf or
* extended_buf. This array must be filled contiguously -- if buf[i] is
* non-NULL then buf[j] must also be non-NULL for all j < i.
*
* There may be at most one AVBuffer per data plane, so for video this array
* always contains all the references. For planar audio with more than
* AV_NUM_DATA_POINTERS channels, there may be more buffers than can fit in
* this array. Then the extra AVBufferRef pointers are stored in the
* extended_buf array.
*/
AVBufferRef *buf[AV_NUM_DATA_POINTERS];
So buf is a "smart pointer" for data (and extended_buf for extended_data).
For example, I use an image with a single plane and a single linesize:
int size = av_image_get_buffer_size(AV_PIX_FMT_BGRA, width, height, 1);
AVBufferRef* dataref = av_buffer_alloc(size);   // this is what av_frame_unref() will release
memcpy(dataref->data, your_buffer, size);
AVFrame* frame = av_frame_alloc();
av_image_fill_arrays(frame->data, frame->linesize, dataref->data,
                     AV_PIX_FMT_BGRA, width, height, 1);
frame->buf[0] = dataref;
av_frame_unref() will unref frame->buf and free the buffer once its reference count drops to zero.
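For cleanup, a short sketch of what this buys you (my addition, using the variables from the snippet above):
// Because frame->buf[0] owns dataref, a single unref releases the pixel buffer too:
av_frame_unref(frame);    // drops the reference; the buffer is freed once the count reaches zero
// or, when the frame itself is no longer needed:
av_frame_free(&frame);    // unrefs and then frees the AVFrame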
Related
Hey, sorry for this novice question, but I think I'm just missing something obvious... I would be very happy with some guidance on this.
Inline documentation from esp_camera.h:
/**
 * @brief Data structure of camera frame buffer
 */
typedef struct {
    uint8_t *buf;         /*!< Pointer to the pixel data */
    size_t len;           /*!< Length of the buffer in bytes */
    size_t width;         /*!< Width of the buffer in pixels */
    size_t height;        /*!< Height of the buffer in pixels */
    pixformat_t format;   /*!< Format of the pixel data */
} camera_fb_t;
plus an extract of the ESP32 demo code:
//replace this with your own function
display_image(fb->width, fb->height, fb->pixformat, fb->buf, fb->len);
Code getting the framebuffer:
camera_fb_t * fb = NULL;
esp_err_t res = ESP_OK;
fb = esp_camera_fb_get(); // framebuffer in grayscale
and then I want to feed the fb buffer into imagebuffer:
int w, h;
int i, count;
uint8_t *imagebuffer = quirc_begin(qr, &w, &h);
//Feed 'fb' into 'imagebuffer' somehow?
//-------------------------------
// ----- DUMMY CODE?! not the proper way? ----
imagebuffer = fb->buf; //fb's own buf field, holding the pixel data
//Comment from quirc below:
/* Fill out the image buffer here.
* 'imagebuffer' is a pointer to a w*h bytes.
* One byte per pixel, w pixels per line, h lines in the buffer.
*/
//
quirc_end(qr);
Inline comment documentation from quirc:
/* These functions are used to process images for QR-code recognition.
* quirc_begin() must first be called to obtain access to a buffer into
* which the input image should be placed. Optionally, the current
* width and height may be returned.
*
* After filling the buffer, quirc_end() should be called to process
* the image for QR-code recognition. The locations and content of each
* code may be obtained using accessor functions described below.
*/
uint8_t *quirc_begin(struct quirc *q, int *w, int *h);
void quirc_end(struct quirc *q);
https://github.com/dlbeer/quirc
I've looked through the code, source files, etc., but as I'm a novice I have no clue how to merge or feed the one into the other.
Could anyone point me in the right direction here? I don't mind digging through heaps of code, but my inexperience with C is the issue here. Thanks!
The author of the library was kind enough to explain it; posting the code answer here as it may help others:
int w, h;
int i, count;
uint8_t *buff = quirc_begin(qr, &w, &h);
//
int total_pixels = w * h;
for (int i = 0; i < total_pixels; i++) {
// grab a pixel from your source image at element i
// convert it somehow, then store it
buff[i] = fb->buf[i]; //?
}
//
quirc_end(qr);
count = quirc_count(qr);
Serial.println("count found codes:");
Serial.println(count);
GitHub issue with the library author's explanation
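For completeness, here is a condensed sketch of the whole flow. The quirc_resize() call and the memcpy are my additions, and it assumes the camera is configured for PIXFORMAT_GRAYSCALE, so one camera byte maps directly to one quirc byte:
struct quirc *qr = quirc_new();
if (quirc_resize(qr, fb->width, fb->height) < 0) {
    // out of memory
}
int w, h;
uint8_t *image = quirc_begin(qr, &w, &h);   // w and h now match fb->width / fb->height
memcpy(image, fb->buf, (size_t)w * h);      // one byte per pixel, no conversion needed
quirc_end(qr);
int count = quirc_count(qr);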
I am trying to use OpenGL to capture the back buffer of my computer's screen, and then H.264-encode the buffer using FFmpeg's libavcodec library. The issue I'm having is that I would like to encode the video as AV_PIX_FMT_YUV420P, but the back-buffer capture function provided by OpenGL, glReadPixels(), only supports formats like GL_RGB. As you can see below, I try to use FFmpeg's sws_scale() function to convert from RGB to YUV, but the following code crashes at the sws_scale() line. Any ideas on how I can encode the OpenGL back buffer?
// CAPTURE BACK BUFFER USING OPENGL
int width = 1280, height = 720;
BYTE* pixels = (BYTE *) malloc(sizeof(BYTE));
glReadPixels(0, 720, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
//CREATE FFMPEG VARIABLES
avcodec_register_all();
AVCodec *codec;
AVCodecContext *context;
struct SwsContext *sws;
AVPacket packet;
AVFrame *frame;
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
context = avcodec_alloc_context3(encoder->codec);
context->dct_algo = FF_DCT_FASTINT;
context->bit_rate = 400000;
context->width = width;
context->height = height;
context->time_base.num = 1;
context->time_base.den = 30;
context->gop_size = 1;
context->max_b_frames = 1;
context->pix_fmt = AV_PIX_FMT_YUV420P;
avcodec_open2(context, codec, NULL);
// CONVERT TO YUV AND ENCODE
int frame_size = avpicture_get_size(AV_PIX_FMT_YUV420P, out_width, out_height);
encoder->frame_buffer = malloc(frame_size);
avpicture_fill((AVPicture *) encoder->frame, (uint8_t *) encoder->frame_buffer, AV_PIX_FMT_YUV420P, out_width, out_height);
sws = sws_getContext(in_width, in_height, AV_PIX_FMT_RGB32, out_width, out_height, AV_PIX_FMT_YUV420P, SWS_FAST_BILINEAR, 0, 0, 0);
uint8_t *in_data[1] = {(uint8_t *) pixels};
int in_linesize[1] = {width * 4};
// PROGRAM CRASHES HERE
sws_scale(encoder->sws, in_data, in_linesize, 0, encoder->in_height, encoder->frame->data, encoder->frame->linesize);
av_free_packet(&packet);
av_init_packet(&packet);
int success;
avcodec_encode_video2(context, &packet, frame, &success);
Your pixels buffer is too small; you malloc only one BYTE instead of width*height*4 bytes:
BYTE* pixels = (BYTE *) malloc(width*height*4);
Your glReadPixels call is also incorrect:
Passing y=720 causes it to read outside the window. Remember that the OpenGL coordinate system has its y-axis pointing upwards.
AV_PIX_FMT_RGB32 expects four bytes per pixel, whereas GL_RGB writes three bytes per pixel, therefore you need GL_RGBA or GL_BGRA.
Of the two I'm pretty sure that it should be GL_BGRA: AV_PIX_FMT_RGB32 treats pixels as 32-bit integers, therefore on little-endian blue comes first. OpenGL treats each channel as a byte, therefore it should be GL_BGRA to match.
To summarize try:
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Additionally, because the OpenGL y-axis points upwards while the FFmpeg y-axis points downwards, you may need to flip the image. It can be done with the following trick:
uint8_t *in_data[1] = {(uint8_t *) pixels + (height-1)*width*4}; // address of the last line
int in_linesize[1] = {- width * 4}; // negative stride
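Putting the pieces together, a corrected sketch of the capture path, keeping the question's variable names; it assumes the destination frame has already been set up with avpicture_fill() as in the question and that the sws context was created with AV_PIX_FMT_RGB32 input of the same width and height:
BYTE* pixels = (BYTE *) malloc(width * height * 4);                        // 4 bytes per pixel for GL_BGRA
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
uint8_t *in_data[1] = { (uint8_t *) pixels + (height - 1) * width * 4 };   // start at the last row
int in_linesize[1] = { -width * 4 };                                       // negative stride flips vertically
sws_scale(sws, in_data, in_linesize, 0, height, frame->data, frame->linesize);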
I am working on a program to stack astronomical images. It will involve storing hundreds of 16-bit RGB images in memory for processing. I cannot just handle the images one at a time, since in some cases I want to take the median value for each pixel. I decided to create temporary files and mmap my images to them, so I can use my image API as if nothing changed between the image being in normal memory and being mmapped, since the kernel should handle accessing the necessary parts of the file behind the scenes. I am posting this because I just want to make sure I am doing it correctly. Will this approach make it possible to load 50 GB of images (hypothetically) if I only have 16 GB of RAM plus my 8 GB swap partition?
typedef struct
{
    size_t w;
    size_t h;
    pixel_t px[];
} image_t;

//allocate in normal memory
image_t* image_new(size_t w, size_t h)
{
    assert(w && h);
    image_t* img = malloc(sizeof(image_t) + sizeof(pixel_t) * w * h);
    if(!img)
        return NULL;
    img->w = w;
    img->h = h;
    memset(img->px, 0, sizeof(pixel_t) * w * h);
    return img;
}

image_t* image_mmap_new(image_t* img)
{
    FILE* tmp = tmpfile();
    if(tmp == 0)
        return NULL;
    size_t wsize = sizeof(image_t) + sizeof(pixel_t) * img->w * img->h;
    if(fwrite(img, 1, wsize, tmp) != wsize)
    {
        fclose(tmp);
        return NULL;
    }
    image_t* nimg = mmap(NULL, wsize, PROT_WRITE | PROT_READ, MAP_SHARED, fileno(tmp), 0);
    fclose(tmp);
    return nimg;
}
As far as I know, the only limit is the address space. Once mapped, a file is accessed like a memory region, and only the portions actually touched are loaded into physical memory.
More information in this (old but still valid) GNU C Library manual:
http://www.gnu.org/software/libc/manual/html_node/index.html#toc-Introduction-1
Actually it seems that I was too conservative; you can map even larger files: How big can a memory-mapped file be?
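One detail worth adding to image_mmap_new() above (my addition, not something the answer requires): mmap() reports failure, including running out of address space, by returning MAP_FAILED rather than NULL, so it pays to check for it:
image_t* nimg = mmap(NULL, wsize, PROT_WRITE | PROT_READ, MAP_SHARED, fileno(tmp), 0);
fclose(tmp);                     // the mapping stays valid after the FILE* is closed
if (nimg == MAP_FAILED)          // e.g. ENOMEM when the address space is exhausted
    return NULL;
return nimg;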
All, I wrote a very simple OpenCL kernel which transforms an RGB image to gray scale using simple averaging.
Some background:
The image is stored in mapped memory, as a 24 bit, non padded memory block
The output array is stored in pinned memory (mapped with clEnqueueMapBuffer) and is 8 bpp
There are two buffers allocated on the device (clCreateBuffer); one is read-only (which we clEnqueueWriteBuffer into before the kernel starts) and the other is write-only (which we clEnqueueReadBuffer from after the kernel finishes)
I am running this on a 1280x960 image. A serial version of the algorithm averages 60ms, the OpenCL kernel averages 200ms!!! I'm doing something wrong but I have no idea how to proceed, what to optimize. (Timing my reads/writes without a kernel call, the algorithm runs in 15ms)
I am attaching the kernel setup (sizes and arguments) as well as the kernel
EDIT: So I wrote an even dumber kernel, that does no global memory accesses inside it, and it was only 150ms... This is still ridiculously slow. I thought maybe I'm messing up with global memory reads, they have to be 4 byte aligned or something? Nope...
Edit 2: Removing all the parameters from my kernel gave me a significant speed-up... I'm confused; I thought that since I'm using clEnqueueWriteBuffer, the kernel should be doing no memory transfer from host->device or device->host...
Edit 3: Figured it out, but I still don't understand why. If anyone could explain it, I would be glad to award them the correct answer. The problem was passing the custom structs by value. It looks like I'll need to allocate a global memory location for them and pass their cl_mems.
Kernel Call:
//Copy input to device
result = clEnqueueWriteBuffer(handles->queue, d_input_data, CL_TRUE, 0, h_input.widthStep*h_input.height, (void *)input->imageData, 0, 0, 0);
if(check_result(result, "opencl_rgb_to_gray", "Failed to write to input buffer on device!")) return 0;
//Set kernel arguments
result = clSetKernelArg(handles->current_kernel, 0, sizeof(OpenCLImage), (void *)&h_input);
if(check_result(result, "opencl_rgb_to_gray", "Failed to set input struct.")) return 0;
result = clSetKernelArg(handles->current_kernel, 1, sizeof(cl_mem), (void *)&d_input_data);
if(check_result(result, "opencl_rgb_to_gray", "Failed to set input data.")) return 0;
result = clSetKernelArg(handles->current_kernel, 2, sizeof(OpenCLImage), (void *)&h_output);
if(check_result(result, "opencl_rgb_to_gray", "Failed to set output struct.")) return 0;
result = clSetKernelArg(handles->current_kernel, 3, sizeof(cl_mem), (void *)&d_output_data);
if(check_result(result, "opencl_rgb_to_gray", "Failed to set output data.")) return 0;
//Determine run parameters
global_work_size[0] = input->width;//(unsigned int)((input->width / (float)local_work_size[0]) + 0.5);
global_work_size[1] = input->height;//(unsigned int)((input->height/ (float)local_work_size[1]) + 0.5);
printf("Global Work Group Size: %d %d\n", global_work_size[0], global_work_size[1]);
//Call kernel
result = clEnqueueNDRangeKernel(handles->queue, handles->current_kernel, 2, 0, global_work_size, local_work_size, 0, 0, 0);
if(check_result(result, "opencl_rgb_to_gray", "Failed to run kernel!")) return 0;
result = clFinish(handles->queue);
if(check_result(result, "opencl_rgb_to_gray", "Failed to finish!")) return 0;
//Copy output
result = clEnqueueReadBuffer(handles->queue, d_output_data, CL_TRUE, 0, h_output.widthStep*h_output.height, (void *)output->imageData, 0, 0, 0);
if(check_result(result, "opencl_rgb_to_gray", "Failed to write to output buffer on device!")) return 0;
Kernel:
typedef struct OpenCLImage_t
{
    int width;
    int widthStep;
    int height;
    int channels;
} OpenCLImage;

__kernel void opencl_rgb_kernel(OpenCLImage input, __global unsigned char* input_data, OpenCLImage output, __global unsigned char* output_data)
{
    int pixel_x = get_global_id(0);
    int pixel_y = get_global_id(1);
    unsigned char *cur_in_pixel, *cur_out_pixel;
    float avg = 0;

    cur_in_pixel  = (unsigned char *)(input_data  + pixel_y * input.widthStep  + pixel_x * input.channels);
    cur_out_pixel = (unsigned char *)(output_data + pixel_y * output.widthStep + pixel_x * output.channels);

    avg += cur_in_pixel[0];
    avg += cur_in_pixel[1];
    avg += cur_in_pixel[2];
    avg /= 3.0f;

    if (avg > 255.0)
        avg = 255.0;
    else if (avg < 0)
        avg = 0;

    *cur_out_pixel = avg;
}
The overhead of copying the struct value to every thread that gets created is the likely reason for the extra time; with global memory, passing just the reference is enough in the other case. Only the SDK implementer will be able to answer exactly. :)
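A sketch of what that workaround looks like on the host side; the buffer name is mine, and it assumes handles->context holds the cl_context. The kernel's first parameter then becomes a __constant OpenCLImage* instead of a struct passed by value:
cl_mem d_input_info = clCreateBuffer(handles->context,
                                     CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                     sizeof(OpenCLImage), &h_input, &result);
result = clSetKernelArg(handles->current_kernel, 0, sizeof(cl_mem), (void *)&d_input_info);
// kernel side: __kernel void opencl_rgb_kernel(__constant OpenCLImage* input, __global unsigned char* input_data, ...)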
You may want to try a local_work_size like [64, 1, 1] in order to coalesce your memory accesses (note that 64 is a divisor of 1280).
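For example (a sketch; the kernel above is 2D, so only two dimensions are needed):
size_t local_work_size[2]  = { 64, 1 };     // 1280 % 64 == 0, so the global size needs no padding
size_t global_work_size[2] = { 1280, 960 };
result = clEnqueueNDRangeKernel(handles->queue, handles->current_kernel, 2, NULL,
                                global_work_size, local_work_size, 0, NULL, NULL);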
As previously said, you have to use a profiler in order to get more information. Are you using an NVIDIA card? Then download CUDA 4 (not 5), as it contains an OpenCL profiler.
Your performance must be far from the optimum. Change the local work size and the global work size, and try to process two or four pixels per thread. Can you change the way pixels are stored prior to your processing? Then break your struct into three arrays in order to coalesce memory access more effectively.
You can overlap your memory transfers with the GPU work: it will be easier to do with a profiler at hand.
OK, so far I can create an array on the host computer (of type float), copy it to the GPU, and then bring it back to the host as another array (to test whether the copy was successful by comparing it to the original).
I then create a CUDA array from the array on the GPU. Then I bind that array to a CUDA texture.
I now want to read that texture back and compare with the original array (again to test that it copied correctly). I saw some sample code that uses the readTexel() function shown below. It doesn't seem to work for me... (basically everything works except for the section in the bindToTexture(float* deviceArray) function starting at the readTexels(SIZE, testArrayDevice) line).
Any suggestions of a different way to do this? Or are there some obvious problems I missed in my code?
Thanks for the help guys!
#include <stdio.h>
#include <assert.h>
#include <cuda.h>
#define SIZE 20;
//Create a channel description to use.
cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc(32, 0, 0, 0, cudaChannelFormatKindFloat);
//Create a texture to use.
texture<float, 2, cudaReadModeElementType> cudaTexture;
//cudaTexture.filterMode = cudaFilterModeLinear;
//cudaTexture.normalized = false;
__global__ void readTexels(int amount, float *Array)
{
int index = blockIdx.x * blockDim.x + threadIdx.x;
if (index < amount)
{
float x = tex1D(cudaTexture, float(index));
Array[index] = x;
}
}
float* copyToGPU(float* hostArray, int size)
{
//Create pointers, one for the array to be on the device, and one for bringing it back to the host for testing.
float* deviceArray;
float* testArray;
//Allocate some memory for the two arrays so they don't get overwritten.
testArray = (float *)malloc(sizeof(float)*size);
//Allocate some memory for the array to be put onto the GPU device.
cudaMalloc((void **)&deviceArray, sizeof(float)*size);
//Actually copy the array from hostArray to deviceArray.
cudaMemcpy(deviceArray, hostArray, sizeof(float)*size, cudaMemcpyHostToDevice);
//Copy the deviceArray back to testArray in host memory for testing.
cudaMemcpy(testArray, deviceArray, sizeof(float)*size, cudaMemcpyDeviceToHost);
//Make sure contents of testArray match the original contents in hostArray.
for (int i = 0; i < size; i++)
{
if (hostArray[i] != testArray[i])
{
printf("Location [%d] does not match in hostArray and testArray.\n", i);
}
}
//Don't forget free these arrays after you're done!
free(testArray);
return deviceArray; //TODO: FREE THE DEVICE ARRAY VIA cudaFree(deviceArray);
}
cudaArray* bindToTexture(float* deviceArray)
{
//Create a CUDA array to translate deviceArray into.
cudaArray* cuArray;
//Allocate memory for the CUDA array.
cudaMallocArray(&cuArray, &cudaTexture.channelDesc, SIZE, 1);
//Copy the deviceArray into the CUDA array.
cudaMemcpyToArray(cuArray, 0, 0, deviceArray, sizeof(float)*SIZE, cudaMemcpyHostToDevice);
//Release the deviceArray
cudaFree(deviceArray);
//Bind the CUDA array to the texture.
cudaBindTextureToArray(cudaTexture, cuArray);
//Make a test array on the device and on the host to verify that the texture has been saved correctly.
float* testArrayDevice;
float* testArrayHost;
//Allocate memory for the two test arrays.
cudaMalloc((void **)&testArray, sizeof(float)*SIZE);
testArrayHost = (float *)malloc(sizeof(float)*SIZE);
//Read the texels of the texture to the test array in the device.
readTexels(SIZE, testArrayDevice);
//Copy the device test array to the host test array.
cudaMemcpy(testArrayHost, testArrayDevice, sizeof(float)*SIZE, cudaMemcpyDeviceToHost);
//Print contents of the array out.
for (int i = 0; i < SIZE; i++)
{
printf("%f\n", testArrayHost[i]);
}
//Free the memory for the test arrays.
free(testArrayHost);
cudaFree(testArrayDevice);
return cuArray; //TODO: UNBIND THE CUDA TEXTURE VIA cudaUnbindTexture(cudaTexture);
//TODO: FREE THE CUDA ARRAY VIA cudaFree(cuArray);
}
int main(void)
{
float* hostArray;
hostArray = (float *)malloc(sizeof(float)*SIZE);
for (int i = 0; i < SIZE; i++)
{
hostArray[i] = 10.f + i;
}
float* deviceAddy = copyToGPU(hostArray, SIZE);
free(hostArray);
return 0;
}
Briefly:
------------- in your main.cu -------------
1. Define the texture as a global variable:
texture<unsigned int, 2, cudaReadModeElementType> refTexture; // global variable!
// meaning: address the texture with (x,y) (2D) and get an unsigned int
In the main function:
2. Use a CUDA array combined with the texture:
cudaArray* myArray; // declaration
// ask for memory
cudaMallocArray(&myArray,
                &refTexture.channelDesc, /* with this you don't need to fill a channel descriptor */
                width,
                height);
3. Copy data from CPU to GPU (to the array):
cudaMemcpyToArray(myArray,                   // destination: the array
                  0, 0,                      // offsets
                  sourceData,                // pointer uint*
                  width*height*sizeof(uint), // total amount of bytes to be copied
                  cudaMemcpyHostToDevice);
4. Bind texture and array:
cudaBindTextureToArray(refTexture, myArray);
5. Change some parameters in the texture:
refTexture.normalized = false;                    // don't automatically convert fetched data to [0,1[
refTexture.addressMode[0] = cudaAddressModeClamp; // if my indexing is out of bounds: automatically use a valid index (0 if negative, the last one if too large)
refTexture.addressMode[1] = cudaAddressModeClamp;
---------- in the kernel ----------
// find out the indexes (f,c) to be processed by this thread
uint f = (blockIdx.x * blockDim.x) + threadIdx.x;
uint c = (blockIdx.y * blockDim.y) + threadIdx.y;
// this is curious and necessary: indexes for reading from a texture
// are floats! Even if you are certain to access (4,5) you have to
// address the "center", which is (4.5, 5.5)
uint read = tex2D(refTexture, c + 0.5f, f + 0.5f); // refTexture is the global variable
Now you process read and write the results to another region of device global memory, not to the texture itself!
readTexels() is a kernel (__global__) function, i.e. it runs on the GPU. Therefore you need to use the correct syntax to launch a kernel.
Take a look through the CUDA Programming Guide and some of the SDK samples, both available via the NVIDIA CUDA site to see how to launch a kernel.
Hint: It'll end up something like readTexels<<<grid,block>>>(...)
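A minimal launch sketch (the grid/block split is illustrative; note that the trailing semicolon in "#define SIZE 20;" above has to go before SIZE can be used in expressions like this, and that testArrayDevice must be the pointer actually passed to cudaMalloc, which the question's bindToTexture() currently gets wrong):
int threadsPerBlock = 32;
int blocks = (SIZE + threadsPerBlock - 1) / threadsPerBlock;
readTexels<<<blocks, threadsPerBlock>>>(SIZE, testArrayDevice);
cudaDeviceSynchronize();   // optional here: cudaMemcpy on the default stream also synchronizes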