Streaming Ogg FLAC to Icecast with libshout - C

I have a simple streamer developed in C++ that uses libFLAC and libshout to stream to an Icecast server.
The FLAC encoder is created in the following way:
m_encoder = FLAC__stream_encoder_new();
FLAC__stream_encoder_set_channels(m_encoder, 2);
FLAC__stream_encoder_set_ogg_serial_number(m_encoder, rand());
FLAC__stream_encoder_set_bits_per_sample(m_encoder, 16);
FLAC__stream_encoder_set_sample_rate(m_encoder, in_samplerate);
FLAC__stream_encoder_init_ogg_stream(m_encoder, NULL, writeByteArray, NULL, NULL, NULL, this);
The function writeByteArray sends the encoded data to Icecast using the shout_send_raw function from libshout.
shout_send_raw returns the actual number of bytes sent, so I assume that it works as it should; no error is happening.
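For reference, a minimal sketch of what such a write callback can look like; the Streamer class and m_ShoutData member are assumed names, while the callback signature is dictated by libFLAC:

static FLAC__StreamEncoderWriteStatus writeByteArray(
        const FLAC__StreamEncoder *encoder, const FLAC__byte buffer[],
        size_t bytes, unsigned samples, unsigned current_frame,
        void *client_data)
{
    // Streamer / m_ShoutData are hypothetical names for the client data
    Streamer *self = static_cast<Streamer *>(client_data);
    ssize_t sent = shout_send_raw(self->m_ShoutData, buffer, bytes);
    if (sent < 0 || static_cast<size_t>(sent) != bytes)
        return FLAC__STREAM_ENCODER_WRITE_STATUS_FATAL_ERROR;
    return FLAC__STREAM_ENCODER_WRITE_STATUS_OK;
}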
The problem is that the Icecast server does not stream the data I send. I see the following in the log:
[2018-02-15 15:31:47] DBUG stats/modify_node_event update "/radio" total_bytes_read (20735897)
[2018-02-15 15:31:47] DBUG stats/modify_node_event update "/radio" total_bytes_sent (0)
I see that Icecast receives the data, but it does not send it to the connected clients. The mount point is /radio, and when I try to connect to that mount using any media player, nothing happens: no playback.
So my question is: how is it possible that Icecast receives the data but does not send it to connected clients?
Maybe some additional libshout configuration is required; here is how I configure it:
shout_set_format( m_ShoutData, SHOUT_FORMAT_OGG_AUDIO );
shout_set_mime( m_ShoutData, "application/ogg" );
Any help would be appreciated.

To sum up the solution from the comments:
FLAC has a significantly higher bitrate than other commonly used audio codecs, so the default settings will NOT work. The queue size must be increased significantly so that complete data frames fit into it; otherwise, Icecast will not sync on the stream and will refuse to send data out to clients.
The same obviously applies to streaming video. The queue size must be adjusted either for the appropriate mount points or globally.
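For example, a per-mount override in icecast.xml might look like this; the numbers are only illustrative, so size the queue to hold several seconds of your actual bitrate:

<mount>
    <mount-name>/radio</mount-name>
    <!-- the default queue-size (~512 KB) is far too small for FLAC -->
    <queue-size>4194304</queue-size>
    <burst-size>131072</burst-size>
</mount>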

Related

get_multiple_points volttron RPC call

Any chance I could get a tip on the proper way to build an agent that can read multiple points from multiple devices on a BACnet system? I am viewing the actuator agent code, trying to learn how to make the proper RPC call.
So I'm going through the agent development procedure with the agent creation wizard.
In the init I have this just hard-coded at the moment:
def __init__(self, **kwargs):
    super(Setteroccvav, self).__init__(**kwargs)
    _log.debug("vip_identity: " + self.core.identity)
    self.default_config = {}
    self.agent_id = "dr_event_setpoint_adj_agent"
    self.topic = "slipstream_internal/slipstream_hq/"
    self.jci_zonetemp_string = "/ZN-T"
So the BACnet system in the building has JCI VAV boxes, all with the same zone temperature sensor point self.jci_zonetemp_string, and self.topic is how I pulled them into the VOLTTRON config store through the BACnet discovery process.
In my actuate point function (copied from the CSV driver example), am I at all close on how to make the RPC call named reads using get_multiple_points? I'm hoping to scrape the zone temperature sensor readings on BACnet device IDs 6, 7, 8, 9 and 10, which are all the same VAV box controller with the same points/BAS program running.
def actuate_point(self):
    """
    Request that the Actuator set a point on the CSV device
    """
    # Create a start and end timestep to serve as the times we reserve to communicate with the CSV Device
    _now = get_aware_utc_now()
    str_now = format_timestamp(_now)
    _end = _now + td(seconds=10)
    str_end = format_timestamp(_end)
    # Wrap the timestamps and device topic (used by the Actuator to identify the device) into an actuator request
    schedule_request = [[self.ahu_topic, str_now, str_end]]
    # Use a remote procedure call to ask the actuator to schedule us some time on the device
    result = self.vip.rpc.call(
        'platform.actuator', 'request_new_schedule', self.agent_id, 'my_test', 'HIGH', schedule_request).get(
        timeout=4)
    _log.info(f'*** [INFO] *** - SCHEDULED TIME ON ACTUATOR From "actuate_point" method success')
    reads = publish_agent.vip.rpc.call(
        'platform.actuator',
        'get_multiple_points',
        self.agent_id,
        [(('self.topic'+'6', self.jci_zonetemp_string)),
         (('self.topic'+'7', self.jci_zonetemp_string)),
         (('self.topic'+'8', self.jci_zonetemp_string)),
         (('self.topic'+'9', self.jci_zonetemp_string)),
         (('self.topic'+'10', self.jci_zonetemp_string))]).get(timeout=10)
Any tips before I break something on the live system are greatly appreciated :)
The basic form of an RPC call to the actuator is as follows:
# use the agent's VIP connection to make an RPC call to the actuator agent
result = self.vip.rpc.call('platform.actuator', <RPC exported function>, <args>).get(timeout=<seconds>)
Because we're working with devices, we need to know which devices we're interested in and what their topics are. We also need to know which points on the devices we're interested in.
device_map = {
    'device1': '201201',
    'device2': '201202',
    'device3': '201203',
    'device4': '201204',
}
building_topic = 'campus/building'
all_device_points = ['point1', 'point2', 'point3']
Getting points with the actuator requires a list of point topics, or device/point topic pairs.
# we only need one of the following:
point_topics = []
for device in device_map.values():
    for point in all_device_points:
        point_topics.append('/'.join([building_topic, device, point]))

device_point_pairs = []
for device in device_map.values():
    for point in all_device_points:
        device_point_pairs.append(('/'.join([building_topic, device]), point,))
Now we send our RPC request to the actuator:
# device_point_pairs can be used instead of point_topics
point_results = self.vip.rpc.call('platform.actuator', 'get_multiple_points', point_topics).get(timeout=3)
Maybe it's just my interpretation of your question, but it seems a little open-ended, so I shall respond in a similar vein: general (and I'll try to keep it short).
First, you need the list of info for targeting each device in turn; i.e. it might consist of just an IP(v4) address (for the physical device) and the (logical) device's BOIN (BACnet Object Instance Number), or, if the request is being routed/forwarded on by/via a BACnet router/gateway, then maybe also the DNET # and the DADR too.
Then you probably want, for each device (one at a time), to retrieve the first/0 element of the device's Object-List property in order to learn the number of objects it contains (including the logical device/device-type object), so you know how many objects you need to retrieve and iterate over. NOTE: in the real world, as much as it's common for the device-type object to be the first one in the list, there's no guarantee it will always be the case.
As much as the BACnet standard now allows for the retrieval of the Property-List property from each and every object, most equipment doesn't yet support it, so you might need your own idea of which properties (at least the ones of interest to you) each different object type supports; at the very least, know which object types support the Present-Value property and which don't.
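As a rough illustration of the walk described above, in Python; note that read_property() here is a hypothetical stand-in for whatever read service your BACnet stack provides, not a real API:

def walk_device(address, device_instance):
    # element 0 of the Object-List holds the number of objects in the list
    count = read_property(address, ('device', device_instance),
                          'object-list', index=0)
    objects = []
    for i in range(1, count + 1):    # list elements are 1-based
        objects.append(read_property(address, ('device', device_instance),
                                     'object-list', index=i))
    return objects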
One ideal would be to have the following mocked facets as fakes for testing purposes, instead of testing against a live/important device (or at least consider testing against a noddy BACnet-enabled Raspberry Pi or similar hardware):
a mock for your BACnet service
a mock for the BACnet communication stack
a mock for your device as a whole (if you can't write your own, then maybe even start with the YABE 'Room Control Simulator' as a starting point)
Hope this helps (in some way).

How to send/receive binary data over TCP using NodeMCU?

I've been trying to mount a custom protocol over the TCP module on the NodeMCU platform. However, the protocol I'm trying to embed inside the TCP data segment is binary, not ASCII-based (like HTTP, for example), so sometimes it contains a NUL char (byte 0x00) that ends the C string inside the TCP module implementation, causing that part of the message inside the packet to get lost.
-- server listens on 80, if data received, print data to console and send "hello world" back to caller
-- 30s timeout for an inactive client
sv = net.createServer(net.TCP, 30)
function receiver(sck, data)
    print(data)
    sck:close()
end

if sv then
    sv:listen(80, function(conn)
        conn:on("receive", receiver)
        conn:send("hello world")
    end)
end
This is a simple example in which, as you can see, 'receiver' is a callback function that prints the data from the TCP segment retrieved by the listener.
How can this be fixed? Is there a way to circumvent this using the NodeMCU library? Or do I have to implement another TCP module, or modify the current one's implementation to support arrays or tables as a return value instead of strings?
Any suggestion is appreciated.
The data you receive in the callback should not be truncated. You can check this for yourself by altering the code as follows:
function receiver(sck, data)
    print("Len: " .. #data)
    print(data)
    sck:close()
end
You will observe that, while the data is indeed only printed up to the first zero byte (by the print() function), the whole data is present in the Lua string data, and you can process it properly with 8-bit-safe (and zero-byte-safe) methods.
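For instance, a zero-byte-safe hex dump of the received payload could look like this (just an illustrative sketch):

function receiver(sck, data)
    -- dump every byte, including 0x00; Lua strings are 8-bit clean
    local hex = {}
    for i = 1, #data do
        hex[#hex + 1] = string.format("%02X", data:byte(i))
    end
    print(table.concat(hex, " "))
    sck:close()
end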
While it should be easy to modify the print() function to also be zero-byte-safe, I do not consider this a bug, since the print function is meant for text. If you want to write binary data to serial, use uart.write(), i.e.
uart.write(0, data)

Read callbacks needed by AudioFileInitializeWithCallbacks? Apple AudioFile API

I am trying to write a low-ish level audio writer with the AudioFile & ExtAudioFile APIs. I am creating a new audio file with AudioFileInitializeWithCallbacks, but it appears that this needs read & get-size callbacks implemented. Why can't it just accept a single write callback and trust that the data has been written successfully?
What if I am writing to a stream which I cannot seek into, such as a CD or a network socket?
Surely this should just continually push data to the write callback, and it would be my responsibility to write this data where needed, returning an error code if the operation didn't succeed.
The docs for AudioFile_SetSizeProc and AudioFile_WriteProc appear to be incorrect, as they both talk about read operations: "inPosition An offset into the data from which to read.", "@result The callback should return the size of the data.".
At the moment I have got past this by only writing to a file, but I get a kExtAudioFileError_InvalidOperationOrder after the first write procedure. What does this mean? There are no comments in the docs about it.
Any pointers or help would be much appreciated.
The Apple documentation is wrong here. Check the header file AudioFile.h:
/*!
    @typedef    AudioFile_SetSizeProc
    @abstract   A callback for setting the size of the file data. Used with AudioFileOpenWithCallbacks or AudioFileInitializeWithCallbacks.
    @discussion A function that will be called when AudioFile needs to set the size of the file data. This size is for all of the
                data in the file, not just the audio data. This will only be called if the file is written to.
    @param      inClientData    A pointer to the client data, as set in the inClientData parameter to AudioFileXXXWithCallbacks.
    @result     The callback should return the size of the data.
*/
typedef OSStatus (*AudioFile_SetSizeProc)(
                void *  inClientData,
                SInt64  inSize);
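For what it's worth, here is a minimal sketch of the four callbacks backed by a growable memory buffer; MemFile and its fields are assumed names, and only the callback signatures come from AudioFile.h:

#include <stdlib.h>
#include <string.h>
#include <AudioToolbox/AudioToolbox.h>

/* MemFile is a hypothetical client-data struct for in-memory writing. */
typedef struct { UInt8 *buf; SInt64 size; } MemFile;

static OSStatus MemRead(void *inClientData, SInt64 inPosition,
                        UInt32 requestCount, void *buffer, UInt32 *actualCount)
{
    MemFile *f = (MemFile *)inClientData;
    SInt64 avail = f->size - inPosition;
    *actualCount = (avail <= 0) ? 0
                 : (UInt32)(avail < requestCount ? avail : requestCount);
    memcpy(buffer, f->buf + inPosition, *actualCount);
    return noErr;
}

static OSStatus MemWrite(void *inClientData, SInt64 inPosition,
                         UInt32 requestCount, const void *buffer,
                         UInt32 *actualCount)
{
    MemFile *f = (MemFile *)inClientData;
    if (inPosition + requestCount > f->size) {   /* grow on demand */
        f->buf = realloc(f->buf, inPosition + requestCount);
        f->size = inPosition + requestCount;
    }
    memcpy(f->buf + inPosition, buffer, requestCount);
    *actualCount = requestCount;
    return noErr;
}

static SInt64 MemGetSize(void *inClientData)
{
    return ((MemFile *)inClientData)->size;
}

static OSStatus MemSetSize(void *inClientData, SInt64 inSize)
{
    MemFile *f = (MemFile *)inClientData;
    f->buf = realloc(f->buf, inSize);
    f->size = inSize;
    return noErr;
}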

MJPEG internet streaming - accurate fps

I want to write an MJPEG picture internet stream viewer. I think getting JPEG images using sockets is not a very hard problem. But I want to know how to make the streaming accurate.
while (1)
{
    get_image()
    show_image()
    sleep (SOME_TIME) // how to make it accurate?
}
Any suggestions would be great.
In order to make it accurate, there are two possibilities:
Using the framerate from the streaming server. In this case, the client needs to keep the same framerate (each time you get a frame, show it and sleep for a variable amount of time, using feedback: if the calculated framerate is higher than the server's, sleep more; if lower, sleep less; the framerate on the client side will then drift around the original value from the server). The framerate can be received from the server during the initialization of the streaming connection (when you get the picture size and other parameters), or it can be configured.
The most accurate approach, actually, is to use per-frame timestamps from the server (which are either taken from the file by the demuxer or generated in the image sensor driver in the case of a camera device). If the MJPEG is packetized into an RTP stream, these timestamps are already in the RTP header, so the client's task is trivial: show the picture at the time calculated from the time offset, the current timestamp, and the time base (a sketch follows at the end of this answer).
Update
For the first solution:
time_to_sleep = time_to_sleep_base = 1/framerate;
number_of_frames = 0;
time = current_time();

while (1)
{
    get_image();
    show_image();
    sleep (time_to_sleep);

    /* update time to sleep */
    number_of_frames++;
    cur_time = current_time();
    cur_framerate = number_of_frames/(cur_time - time);
    if (cur_framerate > framerate)
        time_to_sleep += alpha*time_to_sleep;
    else
        time_to_sleep -= alpha*time_to_sleep;
    time = cur_time;
}
where alpha is a constant parameter (0.1..0.5) controlling the reactivity of the feedback, to play with.
However, it's better to organize a queue for the input images to make the showing process smoother. The size of the queue can be parametrized and could be around one second's worth of frames, i.e. numerically equal to the framerate.
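For the second, timestamp-based solution, a rough client-side sketch (it assumes the 90 kHz RTP clock that RFC 2435 prescribes for JPEG over RTP; first_rtp and first_wall are captured from the first frame):

/* note: real code must also handle the 32-bit RTP timestamp wrapping */
#define RTP_CLOCK 90000.0  /* Hz, per RFC 2435 */

double due = first_wall + (rtp_ts - first_rtp) / RTP_CLOCK;
double now = current_time();
if (due > now)
    sleep(due - now);
show_image();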

Video packet capture over multiple IP cameras

We are working on a C application, a simple RTSP/RTP client, to record video from a number of Axis cameras. We launch a pthread for each camera, which establishes the RTP session and begins to record the packets captured using the recvfrom() call.
A single camera single pthread records fine for well over a day without issues.
But when testing with more cameras, about 25 (so 25 pthreads), the recording to file goes fine for 15 to 20 minutes, and then the recording just stops. The application keeps running. It's been over a month and a half that we have been trying varied implementations, but nothing seems to help. Please provide suggestions.
We are using the CentOS 5 platform.
Define "record" Does that mean write data to a file? How do you control access to the file?
You can't have several threads all trying to write at the exact same time. So the comment by Alon seems to be pertinent. Your write access control machanism has problems.
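If several threads do end up sharing a single file or descriptor, the simplest fix is to serialize the writes; a generic sketch, not taken from your code:

#include <pthread.h>
#include <unistd.h>

/* one lock shared by all recording threads; hold it around every
   write so interleaved packets cannot corrupt the file */
static pthread_mutex_t write_lock = PTHREAD_MUTEX_INITIALIZER;

ssize_t locked_write(int fd, const void *buf, size_t len)
{
    pthread_mutex_lock(&write_lock);
    ssize_t n = write(fd, buf, len);
    pthread_mutex_unlock(&write_lock);
    return n;
}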
void *IPThread(void *ptr)
{
    //Establish RTSP session
    //Bind to RTP ports (video)
    //Increase socket buffer size to 625KB
    record_fd = open(record_name, O_CREAT|O_RDWR|O_TRUNC, 0777);
    while(1)
    {
        if(poll(RTP/RTCP ports)) //a timeout value of 1
        {
            if(RTCP event)
                RTCPhandler();
            if(RTP event)
            {
                recvfrom(); //the normal socket api recvfrom
                WritePacketToFile(record_fd)
                {
                    //Create new record_fd after 100MB
                }
            }
        }
    }
}
Even if it is all right to stick to the single-threaded implementation, why is the multithreaded approach behaving this way (not recording after ~15 minutes)?
