Saving a GdkPixbuf from a DBus client/server results in different files - dbus

I have a DBus service that creates a Variant for a Pixbuf; when the image is saved on either side, I get two different files, despite the data being the same. The image saved from the server side is correct; the one on the client side shows the top third correctly, the middle third shifted horizontally by a third of the width with wonky colors, and the bottom third shifted by two thirds of the width with different wonky colors.
The Variant on the server side is created as follows:
var image_var = new Variant ("(iiibi^ay)",
                             width,
                             height,
                             stride,
                             has_alpha,
                             bits_per_sample,
                             data);
and unpacked by the client using
Variant data_var = null;
image.get ("(iiibi#ay)",
           &width,
           &height,
           &stride,
           &has_alpha,
           &bits_per_sample,
           &data_var);
On both sides I print things about the pixbuf, including a checksum. The server side gives
Width: 1024
Height: 768
Stride: 3072
Bits/Sample: 8
Has Alpha: false
Data Length: 786432
Data Checksum: e1facf66095e46d7ca3338b6438c1939
and the client
Width: 1024
Height: 768
Stride: 3072
Bits/Sample: 8
Has Alpha: false
Data Length: 786432
Data Checksum: e1facf66095e46d7ca3338b6438c1939
Everything is definitely the same. The call to save the image on both sides is
pixbuf.save (filename, "jpeg", "quality", "100", null);
This has been tested, and the wonkiness verified, on three different computers. I will provide a complete example, likely tomorrow; I just wanted to put this out there first in case someone has come across this before.

Sending large data blobs like images in D-Bus messages is not what D-Bus is designed for — it’s intended for control messages, not large data messages. You’ll get bad performance, and will likely hit the D-Bus message size limits for larger images. See Passing a large data structure over dbus for an example of that.
Instead, you should send a handle to the image data. D-Bus provides functionality for this in the form of the file descriptor type (type string h), which allows you to pass a file descriptor for the image data from one process to another. The file descriptor could be an unnamed pipe, or could be an open read-only file, for example.
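Here's a minimal sketch of that approach using the C GDBus API (the Vala bindings wrap the same calls); the method name GetImage, the helper names, and the use of a plain file are assumptions for illustration, and error checks are omitted for brevity:

#include <gio/gio.h>
#include <gio/gunixfdlist.h>
#include <fcntl.h>
#include <unistd.h>

/* Server side: reply to a method call with a file descriptor.
 * The "h" value carries an index into the message's fd list,
 * not the descriptor itself. */
static void
reply_with_image_fd (GDBusMethodInvocation *invocation)
{
    GError *error = NULL;
    gint fd = open ("image.raw", O_RDONLY);  /* hypothetical image file */
    GUnixFDList *fd_list = g_unix_fd_list_new ();
    gint handle = g_unix_fd_list_append (fd_list, fd, &error);

    g_dbus_method_invocation_return_value_with_unix_fd_list (
        invocation, g_variant_new ("(h)", handle), fd_list);

    g_object_unref (fd_list);
    close (fd);  /* the fd list holds its own duplicate */
}

/* Client side: retrieve the descriptor from the reply. */
static gint
fetch_image_fd (GDBusProxy *proxy)
{
    GError *error = NULL;
    GUnixFDList *out_fds = NULL;
    GVariant *reply = g_dbus_proxy_call_with_unix_fd_list_sync (
        proxy, "GetImage", NULL, G_DBUS_CALL_FLAGS_NONE, -1,
        NULL, &out_fds, NULL, &error);

    gint32 handle = -1;
    g_variant_get (reply, "(h)", &handle);
    gint fd = g_unix_fd_list_get (out_fds, handle, &error);  /* dup'd fd */

    g_variant_unref (reply);
    g_object_unref (out_fds);
    return fd;
}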

Related

Streaming OGG Flac to Icecast with libshout

I have a simple streamer developed in C++ using libFLAC and libshout to stream to an Icecast server.
The FLAC encoder is created in the following way:
m_encoder = FLAC__stream_encoder_new();
FLAC__stream_encoder_set_channels(m_encoder, 2);
FLAC__stream_encoder_set_ogg_serial_number(m_encoder, rand());
FLAC__stream_encoder_set_bits_per_sample(m_encoder, 16);
FLAC__stream_encoder_set_sample_rate(m_encoder, in_samplerate);
FLAC__stream_encoder_init_ogg_stream(m_encoder, NULL, writeByteArray, NULL, NULL, NULL, this);
The function writeByteArray sends the encoded data to Icecast using the shout_send_raw function from libshout.
shout_send_raw returns the actual number of bytes sent, so I assume it works as it should; no error occurs.
The problem is that the Icecast server does not stream the data that I send. I see the following in the log:
[2018-02-15 15:31:47] DBUG stats/modify_node_event update "/radio" total_bytes_read (20735897)
[2018-02-15 15:31:47] DBUG stats/modify_node_event update "/radio" total_bytes_sent (0)
I see that Icecast receives the data, but it does not send it to connected clients. The mount point is /radio, and when I try to connect to it using any media player, nothing happens: no playback.
So my question is how is that possible that Icecast receives the data but does not send it to connected clients?
Maybe some additional libshout configuration is required; here is how I configure it:
shout_set_format( m_ShoutData, SHOUT_FORMAT_OGG_AUDIO );
shout_set_mime( m_ShoutData, "application/ogg" );
Any help would be appreciated.
To sum up the solution from the comments:
FLAC has a significantly higher bitrate than any other commonly used audio codec, so the default settings will NOT work. The queue size must be increased significantly so that complete data frames fit in it; otherwise Icecast will not sync on the stream and will refuse to send data out to clients.
The same obviously applies to streaming video. The queue size must be adjusted either for the appropriate mount points or globally, as in the sketch below.
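A hedged example of what that might look like in icecast.xml; the sizes are illustrative, so scale the queue to hold a few seconds of your actual stream bitrate (the default queue-size is 512 KB):

<limits>
    <!-- FLAC at ~1 Mbit/s fills the 512 KB default in about 4 seconds -->
    <queue-size>4194304</queue-size>
</limits>

<!-- or per mount point -->
<mount>
    <mount-name>/radio</mount-name>
    <queue-size>4194304</queue-size>
</mount>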

Serving a file efficiently using Play 2.3

I need to serve some content from an Action in the form of a file: basically, I am creating CSV content on the fly and sending it to the client.
I cannot do it using sendFile, since the file does not really exist; I tried using chunked transfer, but I get a really slow response (on localhost I got the file at about 100KB/s, which I think is really strange).
Is there a way for me to set the content type and write the response "line by line", without having to specify the content length "a priori"?
Here's one way using a simple predefined Enumerator that will produce the response from bytes written to an OutputStream:
def csv = Action {
  val enumerator = Enumerator.outputStream { out =>
    out.write(...)
    // Keep writing to the Enumerator
    out.close()
  }
  Ok.chunked(enumerator.andThen(Enumerator.eof)).withHeaders(
    "Content-Type" -> "text/csv",
    "Content-Disposition" -> s"attachment; filename=test.csv"
  )
}
This is simple enough for relatively small files (or if the process of generating the file is slow by nature); however, note that per the documentation this has no back-pressure, so writing a large amount of data to the OutputStream can quickly fill up memory if the client can't download it fast enough.
Update:
After testing this some more, it seems like the size of the byte arrays you write to the OutputStream makes a huge difference in throughput.
Using this sample stream:
val s = Stream.continually(0.toByte)
Writing in chunks of 1KB to the OutputStream like this resulted in 6MB/s of throughput:
(0 until 1024*1024).foreach { i =>
  out.write(s.take(1024).toArray)
}
However, if I only write 10 bytes at a time, the throughput slows to less than 100KB/s. So my suggestion for using this method to write CSVs in chunked form is to write multiple rows at a time to the OutputStream rather than one row at a time, as in the sketch below.
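One hedged way to get that batching without restructuring your row loop is to interpose a buffer; the 32 KB size and the rows collection are assumptions to adapt:

val enumerator = Enumerator.outputStream { out =>
  // Buffer many small row writes into larger chunks before they
  // reach the underlying stream.
  val buffered = new java.io.BufferedOutputStream(out, 32 * 1024)
  rows.foreach(row => buffered.write((row + "\n").getBytes("UTF-8")))
  buffered.close() // flushes the tail and closes the wrapped stream
}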

What is the most performant way to render unmanaged video frames in WPF?

I'm using FFmpeg library to receive and decode H.264/MPEG-TS over UDP with minimal latency (something MediaElement can't handle).
On a dedicated FFmpeg thread, I'm pulling PixelFormats.Bgr32 video frames for display. I've already tried InteropBitmap:
_section = CreateFileMapping(INVALID_HANDLE_VALUE, IntPtr.Zero, PAGE_READWRITE, 0, size, null);
_buffer = MapViewOfFile(_section, FILE_MAP_ALL_ACCESS, 0, 0, size);
Dispatcher.Invoke((Action)delegate()
{
    _interopBitmap = (InteropBitmap)Imaging.CreateBitmapSourceFromMemorySection(
        _section, width, height, PixelFormats.Bgr32, (int)size / height, 0);
    this.Source = _interopBitmap;
});
And then per frame update:
Dispatcher.Invoke((Action)delegate()
{
    _interopBitmap.Invalidate();
});
But performance is quite bad (skipped frames, high CPU usage, etc.).
I've also tried WriteableBitmap: FFmpeg is placing frames in _writeableBitmap.BackBuffer, and the per-frame update is:
Dispatcher.Invoke((Action)delegate()
{
    _writeableBitmap.Lock();
});
try
{
    ret = FFmpegInvoke.sws_scale(...);
}
finally
{
    Dispatcher.Invoke((Action)delegate()
    {
        _writeableBitmap.AddDirtyRect(_rect);
        _writeableBitmap.Unlock();
    });
}
I'm experiencing almost the same performance issues (tested with various DispatcherPriority values).
Any help will be greatly appreciated.
I know it's late, but I'm writing this answer for those who are still struggling with this problem.
Recently, I did a rendering project using InteropBitmap in which I was able to run about 16 media player components in a WPF window at the same time, at 25 fps, on a Core i7 1.6 GHz + 8 GB RAM laptop.
Here are the tips I used for performance tweaking:
First of all, I did not let the GC handle my video packets. I allocated memory with Marshal.AllocHGlobal wherever I needed to instantiate a video frame, and freed it with Marshal.FreeHGlobal as soon as rendering was done (see the sketch after this list).
Secondly, I created a dispatcher thread for each individual media player. For more information, read "https://blogs.msdn.microsoft.com/dwayneneed/2007/04/26/multithreaded-ui-hostvisual/".
Thirdly, for aspect ratio and resizing purposes in general, I used the native EmguCV library. It helped performance a lot compared to using bitmaps, overlays, and so on.
I think these steps will help anyone who needs to render manually using InteropBitmap and the like.
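A hedged sketch of the first tip, assuming Bgr32 frames rendered through a WriteableBitmap; FillFrame stands in for your decoder and is hypothetical:

// Allocate the frame buffer outside the GC heap.
IntPtr frame = Marshal.AllocHGlobal(width * height * 4); // one Bgr32 frame
try
{
    FillFrame(frame); // decoder writes pixels into the unmanaged buffer

    // Must run on the bitmap's dispatcher thread.
    _writeableBitmap.WritePixels(
        new Int32Rect(0, 0, width, height),
        frame,
        width * height * 4, // buffer size in bytes
        width * 4);         // stride in bytes
}
finally
{
    Marshal.FreeHGlobal(frame); // release as soon as rendering is done
}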

Win32 clipboard and alpha channel images

My application should be able to copy 32-bit images (RGB + alpha channel) to the clipboard and paste these images from the clipboard. For this I plan to use CF_DIBV5 because the BITMAPV5HEADER structure has a field bV5AlphaMask.
The problem is that there doesn't seem to be a consensus on exactly how the image data should be stored in the clipboard. While doing some tests I found several differences between applications, making it next to impossible to come up with a general solution.
Here are my observations:
When I copy an alpha channel image from Word 2010 or XnView to the clipboard, it is stored without premultiplying pixel data.
When I copy an image using Firefox or Chrome, however, the pixel data seems to be premultiplied by the alpha channel.
Firefox sets bV5AlphaMask to 0xff000000, whereas most other applications do not set it at all but keep it 0. This is strange because these applications put DIBs onto the clipboard that actually contain an alpha channel in the highest 8 bits, yet still set bV5AlphaMask to 0. So one has to assume that if the bit depth is 32, there is an alpha channel, even if bV5AlphaMask is 0.
To cut a long story short, my basic question is this: is there some official information on how alpha channel data should be stored on the clipboard? I'm especially interested in whether or not the data must be premultiplied; as you can see above, Word 2010 and XnView do not premultiply, while Firefox and Chrome do.
Thanks a lot for shedding some light onto this!
UPDATE 2
Pasting into Paint.NET works fine now. It was caused by a bug in my code that did not set the color channels to 0 where the alpha channel was 0, i.e. the premultiplication wasn't done correctly in this case, which seems to have confused Paint.NET.
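For reference, a minimal sketch of the premultiplication being described, for 32bpp BGRA pixel data; scaling every color channel by alpha forces the channels to 0 wherever alpha is 0:

/* Premultiply a buffer of 32bpp BGRA pixels in place. */
static void premultiply_bgra(unsigned char *px, size_t npixels)
{
    for (size_t i = 0; i < npixels; ++i, px += 4)
    {
        unsigned a = px[3];
        px[0] = (unsigned char)(px[0] * a / 255); /* B */
        px[1] = (unsigned char)(px[1] * a / 255); /* G */
        px[2] = (unsigned char)(px[2] * a / 255); /* R */
    }
}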
Still unsolved is the problem with Internet Explorer 10. When copying a PNG with an alpha channel to the clipboard, IE 10 just puts a 24-bit CF_DIBV5 on the clipboard, but Paint.NET can paste this bitmap WITH alpha channel, so there must be another format that IE 10 exposes to the clipboard. Maybe it exposes a PNG using CFSTR_FILECONTENTS and CFSTR_FILEDESCRIPTOR.
UPDATE
I've now implemented it in the way described by arx below and it works pretty well. However, there are still two things that keep me puzzled:
1) Pasting alpha channel images from my app into Paint.NET doesn't preserve the alpha channel; the image appears opaque in Paint.NET. HOWEVER, pasting from Firefox and Chrome into Paint.NET works perfectly, and the alpha channel is preserved! I've dumped the complete DIBV5 and it is identical to the one my app produces, but still it works with FF and Chrome and not with my app, so Firefox and Chrome must be doing something else that my app doesn't do!?
2) The same is true for Internet Explorer 10. Pasting an alpha channel image from IE 10 into my app doesn't work at all: I get a DIB with a bit depth of 24, i.e. no alpha channel. When pasting from IE 10 into Paint.NET, however, the alpha channel is there! So there must be something more to it here as well.
I'm sure there is a right way of storing the alpha in CF_DIBV5, but it really doesn't matter. Applications already handle it inconsistently, so if you want your application to play nicely with others you can't use CF_DIBV5.
I researched copying and pasting transparent bitmaps a while ago. My aim was to successfully paste a transparent bitmap into two versions of Office and GIMP. I looked at several possible formats:
CF_BITMAP
Transparency is always ignored.
CF_DIB
Using 32bpp BI_RGB in the usual 0xAARRGGBB format. GIMP supports this but nothing else does.
CF_DIBV5
GIMP doesn't support this.
"PNG"
Paste supported: GIMP, Word 2000, Excel 2000, Excel 2007 and PowerPoint 2007.
Paste unsupported: Word 2007 and OneNote 2007.
All of these applications successfully export "PNG" if you copy a bitmap.
However, Word and OneNote 2007 will paste a PNG file copied from Explorer. So I came up with the following:
Solution for Copying
Convert your transparent bitmap to PNG format.
Advertise the following clipboard formats:
"PNG" - the raw PNG data.
CF_DIB - for applications (like paint) that don't handle transparency.
CFSTR_FILEDESCRIPTOR - make the PNG look like a file. The file descriptor should have an invented filename with a ".png" extension.
CFSTR_FILECONTENTS - the contents must be exposed as an IStream; just using an HGLOBAL doesn't seem to work. The data is identical to the "PNG" data.
Having done this I could successfully paste transparent bitmaps into GIMP, Office 2000 and Office 2007. You can also paste the PNG directly into an Explorer folder.
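A minimal sketch of the copy side under those assumptions; png_data/png_size are assumed to hold an already-encoded PNG, and the CF_DIB fallback and the CFSTR_FILECONTENTS IStream plumbing are omitted for brevity:

#include <windows.h>
#include <string.h>

BOOL CopyPngToClipboard(HWND hwnd, const void *png_data, SIZE_T png_size)
{
    UINT cfPng = RegisterClipboardFormat(TEXT("PNG"));
    HGLOBAL hMem = GlobalAlloc(GMEM_MOVEABLE, png_size);
    if (!hMem)
        return FALSE;

    memcpy(GlobalLock(hMem), png_data, png_size);
    GlobalUnlock(hMem);

    if (!OpenClipboard(hwnd))
    {
        GlobalFree(hMem);
        return FALSE;
    }
    EmptyClipboard();
    SetClipboardData(cfPng, hMem); /* the clipboard now owns hMem */
    /* ...also advertise CF_DIB, CFSTR_FILEDESCRIPTOR and
       CFSTR_FILECONTENTS here, as described above. */
    CloseClipboard();
    return TRUE;
}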
Update
I realised that I've only answered half the question. This is great for copying, but no use if you want to paste from an application that only copies CF_DIBV5 (like Firefox).
I'd recommend that you use "PNG" if it's available, and otherwise fall back to CF_DIBV5, treating it as premultiplied (a sketch of that preference follows). This will correctly handle Word 2010 (which exports "PNG"), Firefox and Chrome. XnView only exports non-premultiplied CF_DIBV5, so this won't work correctly. I'm not sure you can do any better.
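A sketch of that paste-side format preference; reading the actual data is elided:

UINT cfPng = RegisterClipboardFormat(TEXT("PNG"));
if (IsClipboardFormatAvailable(cfPng))
{
    /* decode the raw PNG data from GetClipboardData(cfPng) */
}
else if (IsClipboardFormatAvailable(CF_DIBV5))
{
    /* read the DIB and treat 32bpp data as premultiplied */
}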
lscf - A Tool for Exploring Clipboard Formats
This is the source of a tool for displaying a list of available clipboard formats. It can also write one to a file. I called it lscf. Create a win32 console application in Visual Studio and paste this source over the main function. It has one very minor bug: it never displays the "Unknown format" error if you mistype a format name.
#include <Windows.h>
#include <stdio.h>
#include <tchar.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))

LPCTSTR cfNames[] = {
    _T("CF_TEXT"),
    _T("CF_BITMAP"),
    _T("CF_METAFILEPICT"),
    _T("CF_SYLK"),
    _T("CF_DIF"),
    _T("CF_TIFF"),
    _T("CF_OEMTEXT"),
    _T("CF_DIB"),
    _T("CF_PALETTE"),
    _T("CF_PENDATA"),
    _T("CF_RIFF"),
    _T("CF_WAVE"),
    _T("CF_UNICODETEXT"),
    _T("CF_ENHMETAFILE"),
    _T("CF_HDROP"),
    _T("CF_LOCALE"),
    _T("CF_DIBV5")
};

int LookupFormat(LPCTSTR name)
{
    for (int i = 0; i != ARRAY_SIZE(cfNames); ++i)
    {
        if (_tcscmp(cfNames[i], name) == 0)
            return i + 1;
    }
    return RegisterClipboardFormat(name);
}

void PrintFormatName(int format)
{
    if (!format)
        return;
    if ((format > 0) && (format <= ARRAY_SIZE(cfNames)))
    {
        _tprintf(_T("%s\n"), cfNames[format - 1]);
    }
    else
    {
        TCHAR buffer[100];
        if (GetClipboardFormatName(format, buffer, ARRAY_SIZE(buffer)))
            _tprintf(_T("%s\n"), buffer);
        else
            _tprintf(_T("#%i\n"), format);
    }
}

void WriteFormats()
{
    int count = 0;
    int format = 0;
    do
    {
        format = EnumClipboardFormats(format);
        if (format)
        {
            ++count;
            PrintFormatName(format);
        }
    }
    while (format != 0);
    if (!count)
        _tprintf(_T("Clipboard is empty!\n"));
}

void SaveFormat(int format, LPCTSTR filename)
{
    HGLOBAL hData = (HGLOBAL)GetClipboardData(format);
    LPVOID data = GlobalLock(hData);
    HANDLE hFile = CreateFile(filename, GENERIC_WRITE, 0, 0, CREATE_ALWAYS, 0, 0);
    if (hFile != INVALID_HANDLE_VALUE)
    {
        DWORD bytesWritten;
        WriteFile(hFile, data, GlobalSize(hData), &bytesWritten, 0);
        CloseHandle(hFile);
    }
    GlobalUnlock(hData);
}

int _tmain(int argc, _TCHAR* argv[])
{
    if (!OpenClipboard(0))
    {
        _tprintf(_T("Cannot open clipboard\n"));
        return 1;
    }
    if (argc == 1)
    {
        WriteFormats();
    }
    else if (argc == 3)
    {
        int format = LookupFormat(argv[1]);
        if (format == 0)
        {
            _tprintf(_T("Unknown format\n"));
            return 1;
        }
        SaveFormat(format, argv[2]);
    }
    else
    {
        _tprintf(_T("lscf\n"));
        _tprintf(_T("List available clipboard formats\n\n"));
        _tprintf(_T("lscf CF_NAME filename\n"));
        _tprintf(_T("Write format CF_NAME to file filename\n\n"));
    }
    CloseClipboard();
    return 0;
}
I was stuck on this problem for a while despite the detailed main answer: my copied images would not seem to preserve alpha (even through a clipboard viewer).
It turns out, the solution is as simple as this:
export CF_DIB (no need for V5) with 32-bit pre-multiplied alpha
and export the "PNG" format
With that, it seemed to be able to paste in all applications I tested (Paint.NET, GIMP, LibreOffice, and so forth).
Essentially, as long as alpha was pre-multiplied, alpha was preserved in CF_DIB in almost every program I used. In a rare one-off case, "PNG" was needed.
To be clear: CF_DIBV5 was not needed.

MJPEG internet streaming - accurate fps

I want to write an MJPEG picture internet stream viewer. I think getting JPEG images using sockets is not a very hard problem, but I want to know how to make the streaming accurate.
while (1)
{
    get_image();
    show_image();
    sleep (SOME_TIME); // how to make it accurate?
}
Any suggestions would be great.
In order to make it accurate, there are two possibilities:
Using the framerate from the streaming server. In this case, the client needs to keep the same framerate: each time you get a frame, show it, then sleep for a variable amount of time driven by feedback (if the measured framerate is higher than the server's, sleep more; if lower, sleep less; the client-side framerate will then drift around the server's value). The framerate can be received from the server during initialization of the streaming connection (when you get the picture size and other parameters), or it can be configured.
The most accurate approach, actually, is to use per-frame timestamps from the server (either taken from the file by the demuxer, or generated by the image sensor driver in the case of a camera device). If the MJPEG is packed into an RTP stream, these timestamps are already in the RTP header, so the client's task is trivial: show each picture at the time computed from the time offset, the current timestamp, and the time base (a sketch follows).
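A minimal sketch of the timestamp-driven variant, assuming RTP's 90 kHz clock for MJPEG (per RFC 2435); now_seconds and sleep_seconds are hypothetical monotonic-clock helpers, and image_t/show_image are your frame type and display routine:

static double   wall_start;   /* wall-clock time of the first frame */
static uint32_t rtp_start;    /* RTP timestamp of the first frame   */

void show_at_timestamp(uint32_t rtp_ts, image_t *img)
{
    /* Map the RTP timestamp onto the local clock and wait until due. */
    double due  = wall_start + (double)(rtp_ts - rtp_start) / 90000.0;
    double wait = due - now_seconds();
    if (wait > 0)
        sleep_seconds(wait);
    show_image(img);
}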
Update
For the first solution:
time_to_sleep = time_to_sleep_base = 1/framerate;
number_of_frames = 0;
start_time = current_time();

while (1)
{
    get_image();
    show_image();
    sleep (time_to_sleep);

    /* update time to sleep from the average framerate so far */
    number_of_frames++;
    cur_time = current_time();
    cur_framerate = number_of_frames/(cur_time - start_time);
    if (cur_framerate > framerate)
        time_to_sleep += alpha*time_to_sleep;   /* too fast: sleep longer */
    else
        time_to_sleep -= alpha*time_to_sleep;   /* too slow: sleep less */
}
Here alpha is a constant that controls the reactivity of the feedback; values around 0.1..0.5 are reasonable to play with.
However, it's better to organize a queue for input images to make the showing process smoother. The size of the queue can be parameterized and should hold somewhere around 1 second of video, i.e. a number of frames numerically equal to the framerate (a minimal sketch follows).
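A minimal sketch of such a queue, assuming one thread receives frames and another shows them; image_t is your frame type, and the mutex/condition variable needed for real cross-thread use is omitted for brevity:

#define QUEUE_SIZE 25 /* about 1 s of video at 25 fps */

typedef struct
{
    image_t *slots[QUEUE_SIZE];
    int head, tail, count;
} frame_queue_t;

/* Producer side: returns 0 when full (drop the frame or block). */
static int queue_push(frame_queue_t *q, image_t *img)
{
    if (q->count == QUEUE_SIZE)
        return 0;
    q->slots[q->tail] = img;
    q->tail = (q->tail + 1) % QUEUE_SIZE;
    q->count++;
    return 1;
}

/* Consumer side: returns NULL on underrun. */
static image_t *queue_pop(frame_queue_t *q)
{
    if (q->count == 0)
        return NULL;
    image_t *img = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count--;
    return img;
}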
