Load data into a GdkPixbufLoader from g_input_stream_read - C

I load some data from a file:
GFile *file = g_file_new_for_path (file_path);
GInputStream *input_stream = G_INPUT_STREAM (g_file_read (file, generator_cancellable, NULL));
g_input_stream_read (input_stream, buffer, sizeof (buffer), generator_cancellable, error);
How can I load the result of g_input_stream_read into the GdkPixbufLoader object?
Thank you.

You need to create a new GdkPixbufLoader and pass the data you read from GInputStream to it:
GdkPixbufLoader *loader = gdk_pixbuf_loader_new ();
gssize num_bytes = g_input_stream_read (input_stream, buffer, ...);
gdk_pixbuf_loader_write (loader, buffer, num_bytes, error);
However, this only makes sense if you read asynchronously or in chunks (e.g. to show a progressively loaded JPEG or PNG). If you just read all the data at once in a blocking manner, use the simpler gdk_pixbuf_new_from_stream().
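For the chunked case, the read loop typically looks roughly like this (a minimal sketch reusing the loader created above, plus the generator_cancellable and error variables from the question; error handling kept short):
guchar buffer[4096];
gssize num_bytes;
while ((num_bytes = g_input_stream_read (input_stream, buffer, sizeof (buffer),
                                         generator_cancellable, error)) > 0) {
    /* Feed each chunk to the loader; the pixbuf becomes available progressively. */
    if (!gdk_pixbuf_loader_write (loader, buffer, num_bytes, error))
        break;
}
/* Signal end of data; required before using the finished pixbuf. */
gdk_pixbuf_loader_close (loader, error);
GdkPixbuf *pixbuf = gdk_pixbuf_loader_get_pixbuf (loader);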

Related

How to allocate memory when using custom reading functions in libpng?

I need to read a base64-encoded PNG image, stored as a char array / null-terminated string, and I'm stuck. Here is what I have found out so far:
libpng can be told to use custom I/O routines via png_set_*_fn().
Read functions must have a prototype like this one: void user_read_data(png_structp png_ptr, png_bytep data, size_t length); and must check for EOF errors.
The original read function (which reads from the PNG file directly) calls fread and dumps everything into the memory pointed to by data. I have no idea how libpng knows the image size.
So, here is my implementation of the read function:
size_t base64_to_PNG(const char *const base64_png, png_bytep out)
{
    size_t encoded_size, decoded_count;
    size_t decoded_size = base64_decoded_block_size(base64_png, &encoded_size);
    decoded_count = base64_decode_block(base64_png, encoded_size, (char*)out);
    if(decoded_count != decoded_size)
        return 0;
    return decoded_size;
}
void my_read_png_from_data_uri(png_structp png_ptr, png_bytep data, size_t length)
{
    const char *base64_encoded_png = NULL;
    size_t PNG_bytes_len;
    if(png_ptr == NULL)
        return;
    base64_encoded_png = png_get_io_ptr(png_ptr);
    PNG_bytes_len = base64_to_PNG(base64_encoded_png, data);
    if(PNG_bytes_len != length)
        png_error(png_ptr, "Error occurred during decoding of the image data");
}
I believe the information about the decoded image size is lost, and I'm heading straight for a segfault, since I'll be writing to some arbitrary address, but I have no idea how to tell libpng how much memory I need. Can you please help me with that?
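For reference, memory-based read callbacks are usually structured so that each call copies exactly length bytes from an already-decoded buffer, advancing a cursor kept in the pointer registered with png_set_read_fn(); libpng never needs to know the total image size up front, it just requests chunks of known length. A minimal sketch (the png_memory_reader struct is made up for illustration, and the base64 data would be decoded once before reading starts):
#include <png.h>
#include <string.h>

typedef struct {
    const png_byte *data;   /* fully decoded PNG bytes */
    size_t size;            /* total number of decoded bytes */
    size_t offset;          /* how much libpng has consumed so far */
} png_memory_reader;

static void read_from_memory(png_structp png_ptr, png_bytep out, size_t length)
{
    png_memory_reader *reader = png_get_io_ptr(png_ptr);
    if(reader == NULL || reader->offset + length > reader->size)
        png_error(png_ptr, "Read request past end of decoded PNG data");
    memcpy(out, reader->data + reader->offset, length);
    reader->offset += length;
}
/* Installed with: png_set_read_fn(png_ptr, &reader, read_from_memory); */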

How to stream Apache Arrow RecordBatches in C?

I read some data from a PostgreSQL database, convert it into RecordBatches and try to send the data to a client. But I fail to properly understand the usage of Apache Arrow C/GLib.
My information sources are the C++ docs, the Apache Arrow C/GLib reference manual and the C/GLib Github files.
By following the usage description of Apache Arrow C++ and experimenting with the wrapper classes in C, I built this minimal example of writing a RecordBatch into a buffer and (after theoretically sending and receiving the buffer) trying to read that buffer back into a RecordBatch. But it fails, and I would be glad if you could point out my mistakes!
I omitted the error checking for readability. The code errors out at creation of the GArrowRecordBatchStreamReader. If I use the arrowbuffer or the buffer from the top when creating the InputStream, the error reads [record-batch-stream-reader][open]: IOError: Expected IPC message of type schema but got record batch. If I use the testBuffer, the error complains about an invalid IPC stream, so that data is just corrupt.
void testRecordbatchStream(GArrowRecordBatch *rb){
    GError *error = NULL;
    // Write Recordbatch
    GArrowResizableBuffer *buffer = garrow_resizable_buffer_new(300, &error);
    GArrowBufferOutputStream *bufferStream = garrow_buffer_output_stream_new(buffer);
    long written = garrow_output_stream_write_record_batch(GARROW_OUTPUT_STREAM(bufferStream), rb, NULL, &error);
    // Use buffer as plain bytes
    void *data = garrow_buffer_get_data(GARROW_BUFFER(buffer));
    size_t length = garrow_buffer_get_size(GARROW_BUFFER(buffer));
    // Read plain bytes and test serialize function
    GArrowBuffer *testBuffer = garrow_buffer_new(data, length);
    GArrowBuffer *arrowbuffer = garrow_record_batch_serialize(rb, NULL, &error);
    // Read RecordBatch from buffer
    GArrowBufferInputStream *inputStream = garrow_buffer_input_stream_new(arrowbuffer);
    GArrowRecordBatchStreamReader *sr = garrow_record_batch_stream_reader_new(GARROW_INPUT_STREAM(inputStream), &error);
    GArrowRecordBatch *rb2 = garrow_record_batch_reader_read_next(sr, &error);
    printf("Received RB: \n%s\n", garrow_record_batch_to_string(rb2, &error));
}
So my solution was to use the GArrowRecordBatchStreamWriter and GArrowRecordBatchStreamReader classes instead of the function garrow_output_stream_write_record_batch(), because the latter only writes a record batch without a stream header and schema. Furthermore, one has to access the data of the GArrowBuffer properly after writing: garrow_buffer_get_data() returns a GBytes*, not a raw pointer. (Again, error handling omitted.)
GError *error = NULL;
GArrowResizableBuffer *buffer = garrow_resizable_buffer_new(4096, &error);
GArrowBufferOutputStream *bufferStream = garrow_buffer_output_stream_new(buffer);
GArrowSchema *schema = garrow_record_batch_get_schema(recordbatch);
GArrowRecordBatchStreamWriter *sw = garrow_record_batch_stream_writer_new(GARROW_OUTPUT_STREAM(bufferStream), schema, &error);
g_object_unref(bufferStream);
g_object_unref(schema);
gboolean test = garrow_record_batch_writer_write_record_batch(GARROW_RECORD_BATCH_WRITER(sw), recordbatch, &error);
GBytes *data = garrow_buffer_get_data(GARROW_BUFFER(buffer));
gint64 length = garrow_buffer_get_size(GARROW_BUFFER(buffer));
gsize datasize;
gconstpointer datap = g_bytes_get_data(data, &datasize);
GArrowBuffer *receivingBuffer = garrow_buffer_new(datap, datasize);
GArrowBufferInputStream *inputStream = garrow_buffer_input_stream_new(GARROW_BUFFER(receivingBuffer));
GArrowRecordBatchStreamReader *sr = garrow_record_batch_stream_reader_new(GARROW_INPUT_STREAM(inputStream), &error);
printf("Reading RecordBatch:\n");
GArrowRecordBatch *recordbatch2 = garrow_record_batch_reader_read_next(GARROW_RECORD_BATCH_READER(sr), &error);
printf("%s\n", garrow_record_batch_to_string(recordbatch2, &error));

FFMPEG remux sample without writing to file

Let's consider this very nice and easy to use remux sample by horgh.
I'd like to achieve the same task: convert an RTSP H264 encoded stream to a fragmented MP4 stream.
This code does exactly this task.
However, I don't want to write the MP4 to disk at all; I need a byte buffer or array in C with the contents that would normally be written to disk.
How is that achievable?
This sample uses vs_open_output to define the output format, and this function needs an output URL.
If I want to avoid writing the contents to disk, how should I modify this code?
Better alternatives are also welcome.
Update:
As szatmary recommended, I have checked his example link.
However, as I stated in the question, I need the output as a buffer instead of a file.
That example demonstrates nicely how I can read from a custom source and hand it to ffmpeg.
What I need is to open the input the standard way (with avformat_open_input), do my custom modifications to the packets, and then write to a buffer instead of a file.
What have I tried?
Based on szatmary's example I created some buffers and initialization:
uint8_t *buffer;
buffer = (uint8_t *)av_malloc(4096);
format_ctx = avformat_alloc_context();
format_ctx->pb = avio_alloc_context(
buffer, 4096, // internal buffer and its size
1, // write flag (1=true, 0=false)
opaque, // user data, will be passed to our callback functions
0, // no read
&IOWriteFunc,
&IOSeekFunc
);
format_ctx->flags |= AVFMT_FLAG_CUSTOM_IO;
AVOutputFormat * const output_format = av_guess_format("mp4", NULL, NULL);
format_ctx->oformat = output_format;
avformat_alloc_output_context2(&format_ctx, output_format,
NULL, NULL)
Then of course I have created 'IOWriteFunc' and 'IOSeekFunc':
static int IOWriteFunc(void *opaque, uint8_t *buf, int buf_size) {
    printf("Bytes read: %d\n", buf_size);
    int len = buf_size;
    return (int)len;
}
static int64_t IOSeekFunc (void *opaque, int64_t offset, int whence) {
    switch(whence){
        case SEEK_SET:
            return 1;
        case SEEK_CUR:
            return 1;
        case SEEK_END:
            return 1;
        case AVSEEK_SIZE:
            return 4096;
        default:
            return -1;
    }
    return 1;
}
Then I need to write the header to the output buffer, and the expected behaviour here is to print "Bytes read: x":
AVDictionary * opts = NULL;
av_dict_set(&opts, "movflags", "frag_keyframe+empty_moov", 0);
av_dict_set_int(&opts, "flush_packets", 1, 0);
avformat_write_header(output->format_ctx, &opts)
During execution, the last line always runs into a segfault; here is the backtrace:
#0 0x00007ffff7a6ee30 in () at /usr/lib/x86_64-linux-gnu/libavformat.so.57
#1 0x00007ffff7a98189 in avformat_init_output () at /usr/lib/x86_64-linux-gnu/libavformat.so.57
#2 0x00007ffff7a98ca5 in avformat_write_header () at /usr/lib/x86_64-linux-gnu/libavformat.so.57
...
The hard thing for me with the example is that it uses avformat_open_input.
However, there is no such counterpart for the output (no avformat_open_output).
Update2:
I have found another example for reading: doc/examples/avio_reading.c.
There are mentions of a similar example for writing (avio_writing.c), but ffmpeg does not ship it (at least my Google search did not find it).
Is this task really that hard to solve: standard RTSP input to custom AVIO output?
And ffmpeg.org is down right now. Great.
It was a silly mistake:
In the initialization part I called this:
avformat_alloc_output_context2(&format_ctx, output_format,
NULL, NULL)
However, before this I had already put the custom AVIO context into format_ctx:
format_ctx->pb = ...
Also, this line is unnecessary:
format_ctx = avformat_alloc_context();
Correct order:
AVOutputFormat * const output_format = av_guess_format("mp4", NULL, NULL);
avformat_alloc_output_context2(&format_ctx, output_format,
NULL, NULL)
format_ctx->pb = avio_alloc_context(
buffer, 4096, // internal buffer and its size
1, // write flag (1=true, 0=false)
opaque, // user data, will be passed to our callback functions
0, // no read
&IOWriteFunc,
&IOSeekFunc
);
format_ctx->flags |= AVFMT_FLAG_CUSTOM_IO;
format_ctx->oformat = output_format; // might be unnecessary too
Segfault is gone now.
You need to write an AVIOContext implementation.
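A write callback for such an AVIOContext usually just appends whatever ffmpeg hands it to a caller-owned, growable buffer passed through the opaque pointer. A minimal sketch (the buffer_data struct and its field names are made up for illustration):
#include <libavformat/avformat.h>
#include <libavutil/mem.h>
#include <string.h>

typedef struct {
    uint8_t *data;  /* collected MP4 bytes */
    size_t size;    /* number of bytes collected so far */
} buffer_data;

static int write_packet(void *opaque, uint8_t *buf, int buf_size)
{
    buffer_data *bd = opaque;
    /* Grow the output buffer and append the chunk ffmpeg just produced. */
    uint8_t *grown = av_realloc(bd->data, bd->size + buf_size);
    if (!grown)
        return AVERROR(ENOMEM);
    bd->data = grown;
    memcpy(bd->data + bd->size, buf, buf_size);
    bd->size += buf_size;
    return buf_size;
}
Pass a buffer_data * as the opaque argument of avio_alloc_context and this callback in place of IOWriteFunc; after av_write_trailer, the accumulated bytes in bd->data are what would otherwise have gone into the file.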

How to set jbytearray directly from file

I have the following native code that copies from a file into a buffer and then copies the contents of that buffer into a jbyteArray.
JNIEXPORT void JNICALL Java_com_test(JNIEnv * env, jobject){
    int file_descriptor = 100;
    JNIEnv * jni_env = env;
    FILE* file = fdopen(file_descriptor, "r");
    int size_of_file = 1000000;
    unsigned char* buffer = new unsigned char[size_of_file];
    fread(buffer, 1, static_cast<size_t>(size_of_file), file);
    jbyteArray imageArr = static_cast<jbyteArray>(jni_env->NewByteArray(static_cast<jsize>(size_of_file)));
    jni_env->SetByteArrayRegion(imageArr, 0, static_cast<jsize>(size_of_file), (jbyte*)buffer);
    delete[] buffer;
}
As this code runs in a loop, I would like to optimize it as much as possible. Is there any way to read from the file directly into the jbyteArray? I am aware that jbyteArray is a pointer to a struct. Is there any way to set the fields of this struct directly instead of using the SetByteArrayRegion() function?
If not, is there any other function that I can use to read from a file into a jbyteArray?
In short, no. You could probably do it, but it probably won't be much faster, and if the implementation inside the JVM ever changed, your code would stop working. You are dealing with file I/O, so I don't think SetByteArrayRegion is your real bottleneck here.
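If you still want to avoid the intermediate copy, the supported route is to pin the array and fread directly into its elements, for example with GetPrimitiveArrayCritical. A rough sketch, assuming size_of_file and file are as in the question (note that blocking I/O while a critical region is held can stall the garbage collector, so GetByteArrayElements/ReleaseByteArrayElements is the more conservative choice):
jbyteArray imageArr = jni_env->NewByteArray(static_cast<jsize>(size_of_file));
// Pin the array elements and read the file contents straight into them.
void* elems = jni_env->GetPrimitiveArrayCritical(imageArr, nullptr);
if (elems != nullptr) {
    fread(elems, 1, static_cast<size_t>(size_of_file), file);
    // Mode 0: copy back (if a copy was made) and release the pin.
    jni_env->ReleasePrimitiveArrayCritical(imageArr, elems, 0);
}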

MagickCore writing image data to stdout rather than to a filename

I'm using MagickCore to generate images from scratch. I'm trying to save my Image as a PNG file, but whenever I call WriteImage, it outputs to standard out rather than to the filename that I specified. For example:
Image *image = ImageGenerator(...); // generates valid image
ImageInfo *info = CloneImageInfo (NULL);
info->file = NULL;
strcpy (info->filename, "test.png");
strcpy (info->magick, "png");
WriteImage (info, image);
When this code is used, it outputs PNG data to standard out rather than to test.png. Is there something else that I am missing?
The trick was to use the FILE * provided by the ImageInfo struct.
...
info->file = fopen ("test.png", "w+b");
strcpy (info->filename, "test.png");
strcpy (info->magick, "png");
...
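The FILE * is supplied by the caller, so presumably it also has to be closed once WriteImage returns:
WriteImage (info, image);
fclose (info->file);   /* close the handle we opened above */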
