Interrupt handler is not called after being added with g_unix_signal_add - c

I am learning about GStreamer and I have a simple program where I want to end a stream gracefully after pressing CTRL+C in Linux. For this, I've read the source code for gst-launch and saw that g_unix_signal_add() is used to install the signal handlers. In my code I've added the line:
signal_watch_intr_id = g_unix_signal_add (SIGINT,
    (GSourceFunc) intr_handler, data.pipeline);
where data is a structure containing the pipeline. My handling function is:
static gboolean
intr_handler (gpointer user_data)
{
  printf ("handling interrupt.\n");
  GstElement *pipeline = (GstElement *) user_data;

  /* post an application specific message */
  gst_element_post_message (GST_ELEMENT (pipeline),
      gst_message_new_application (GST_OBJECT (pipeline),
          gst_structure_new ("GstLaunchInterrupt",
              "message", G_TYPE_STRING, "Pipeline interrupted", NULL)));

  /* remove signal handler */
  signal_watch_intr_id = 0;
  return G_SOURCE_REMOVE;
}
I expect it to print "handling interrupt." to the console and make the pipeline post a GST_MESSAGE_APPLICATION message, which my code then handles to stop the pipeline. However, the application now simply does nothing after a SIGINT. Because it does nothing (rather than terminating), I know the default handler has been replaced, but I don't understand why my handler function is never called. Can someone help me get this handling to work?

As Philip's comments suggested, the problem was that I wasn't running a main loop; I was pulling messages off the bus with gst_bus_timed_pop() instead. g_unix_signal_add() attaches a source to the GLib main context, so the handler only fires while a main loop is iterating that context. By running a GMainLoop and adding a bus watch (with gst_bus_add_watch()) that dispatches messages to a callback, I managed to catch the interrupt as I wanted.

Related

Problems with issuing at#httpsnd command over UART

I have a custom board that features a Telit LE910C1-EUX module communicating with an ESP32 processor through UART (the ESP32 module is programmed using the ESP-IDF v4.3.2 framework).
Here's the routine I have written to send commands to the Telit module:
esp_err_t TelitSerialCommand(const char* command) {
    // Send command over UART
    uart_write_bytes(UART_NUM_1, command, strlen(command));
    if (uart_wait_tx_done(UART_NUM_1, 100) != ESP_OK) {
        ESP_LOGE(TAG, "Could not send Telit command");
        return ESP_ERR_TIMEOUT;
    }
    // Wait for two seconds
    vTaskDelay(2000 / portTICK_RATE_MS);
    return ESP_OK;
}
And here's the routine I have written to read the Telit's response:
esp_err_t TelitSerialRead(char* RxBuf, size_t length) {
    // First, clear the string holding the previous message
    strncpy(RxBuf, "", length);
    // Check how many data bytes are waiting in the UART buffer
    esp_err_t status = uart_get_buffered_data_len(UART_NUM_1, &length);
    if (status != ESP_OK) {
        ESP_LOGE(TAG, "Could not get UART rx buffer size");
        return status;
    }
    // Read those bytes
    length = uart_read_bytes(UART_NUM_1, RxBuf, length, 100);
    // Clean UART buffer
    uart_flush(UART_NUM_1);
    return ESP_OK;
}
With these two routines, I am able to send commands to the Telit module and read its responses. When it comes to performing a POST operation using at#httpsnd, however, it seems like the command doesn't get through.
Here's a list of commands I use:
at+cmee=2\r
at#sgact=1,1\r
at#httpcfg=0,"www.httpbin.org",80,0,,,0\r
at#httpsnd=0,0,"/post",15,"application/json"\r
After issuing the latter command, the Telit should reply with >>>, signaling that it's ready to read serial data. What I get instead is an echo of the command I issued, as if it were not terminated and the module were still waiting for the \r character. I finally get the >>> reply after another AT command is sent, but I am sure the at#httpsnd command is terminated with a carriage return, so I am not sure why the Telit behaves like this. If I communicate with the Telit using minicom through USB (hence bypassing the ESP32 MCU), all of the commands above work. I can ping 8.8.8.8, so I know I have a network connection, and the GET command AT#HTTPQRY=0,0,"/get"\r works just fine.
Have you ever dealt with a similar problem? Any help would be appreciated!
Turns out I had to disable the RS-232 flow control by issuing the command AT&K0.

How to play audio with gstreamer in C?

I'm trying to play audio with Gstreamer in C.
I'm using Laptop, Ubuntu 16.
To play I use this pipeline and it's working:
gst-launch-1.0 filesrc location=lambo-engine.mp3 ! decodebin ! audioconvert ! autoaudiosink
But when I convert it to C:
GstElement *pipeline, *source, *decode, *audioconvert, *sink;
GMainLoop *loop;
GstBus *bus;
GstMessage *msg;

int main(int argc, char *argv[]) {
    /* Initialize GStreamer */
    gst_init (&argc, &argv);

    /* Create the elements */
    source = gst_element_factory_make ("filesrc", NULL);
    decode = gst_element_factory_make ("decodebin", NULL);
    audioconvert = gst_element_factory_make ("audioconvert", NULL);
    sink = gst_element_factory_make ("autoaudiosink", NULL);

    // Set parameters for some elements
    g_object_set (G_OBJECT (source), "location", "lambo-engine.mp3", NULL);

    /* Create the empty pipeline */
    pipeline = gst_pipeline_new ("pipeline");

    /* Build the pipeline */
    gst_bin_add_many (GST_BIN (pipeline), source, decode, audioconvert, sink, NULL);
    if (gst_element_link_many (source, decode, audioconvert, sink, NULL) != TRUE) {
        g_error ("Failed to link save elements!");
        gst_object_unref (pipeline);
        return -1;
    }

    /* Start playing */
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    bus = gst_element_get_bus (pipeline);
    gst_object_unref (bus);

    /* now run */
    g_main_loop_run (loop);

    /* Free pipeline */
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (GST_OBJECT (pipeline));
    return 0;
}
It builds successfully, but when I run it, it reports an error saying it can't link the elements:
** (example:2516): ERROR **: 22:59:42.310: Failed to link save elements!
Trace/breakpoint trap (core dumped)
Can someone help me figure out the error, please?
Thank you so much.
GStreamer is centered around pipelines, which are lists of elements. Elements have pads to exchange data on. In your example, decodebin has an output pad and audioconvert has an input pad. At the start of the pipeline, the pads need to be linked.
Linking is when pads agree on the format of the data, as well as some other information, such as who's in charge of timing and maybe some more format specifics.
Your problem arises from the fact that decodebin is not really an ordinary element. At runtime, when filesrc starts up, it tells decodebin what data it is producing, and decodebin internally creates the elements needed to handle that file.
For example, filesrc location=test.mp4 ! decodebin would run in this order:

1. linking is delayed because the types are unknown
2. filesrc starts
3. filesrc says "trying to link, I have a pad of format MP4 (h264)"
4. decodebin sees this request and, in turn, creates on the fly an h264 parser element that can handle the file
5. decodebin now has enough information to describe its pads, and it links the rest of the pipeline
6. the video starts playing
Because you are doing this in C, you link the pipeline before filesrc has loaded the file. This means that decodebin doesn't know the format of its pads at startup, and therefore fails to link.
To fix this, you have two options:
1.) Swap out decodebin for something that supports only one type. If you know your videos will always be MP4s with h264, for example, you can use h264parse instead of decodebin. Because h264parse only works with one format, it knows its pad formats at the start and will be able to link without issue.
2.) Reimplement the smart delayed linking. You can read the docs for more info, but the idea is to delay linking of the pipeline and install callbacks that complete the linking once there's enough information. This is what gst-launch-1.0 does under the hood. It has the benefit of being more flexible: anything supported by decodebin will work. The downside is that it's much more complex, involves a nontrivial amount of work on your end, and is more fragile. If you can get away with it, try fix 1.

How can a Ruby C extension store a proc for later execution?

Goal: allow a C extension to receive a block/proc for delayed execution while retaining the current execution context.
I have a method in c (exposed to ruby) that accepts a callback (via VALUE hash argument) or a block.
// For brevity, let's assume m_CBYO is set up to expose a CBYO module to Ruby
extern VALUE m_CBYO;

VALUE CBYO_add_callback(VALUE callback)
{
    if (rb_block_given_p()) {
        callback = rb_block_proc();
    }
    if (NIL_P(callback)) {
        rb_raise(rb_eArgError, "either a block or callback proc is required");
    }
    // a method is called here to add the callback proc to rb_callbacks
}

rb_define_module_function(m_CBYO, "add_callback", CBYO_add_callback, 1);
I have a struct I'm using to store these with some extra data:
struct rb_callback
{
    VALUE rb_cb;
    unsigned long long lastcall;
    struct rb_callback *next;
};

static struct rb_callback *rb_callbacks = NULL;
When it comes time (triggered by an epoll event), I iterate over the callbacks and execute each one:
rb_funcall(cb->rb_cb, rb_intern("call"), 0);
When this happens, I can see that it successfully executes the Ruby code in the callback; however, it escapes the current execution context.
Example:
# From ruby including the above extension
CBYO.add_callback do
puts "Hey now."
end
loop do
puts "Waiting for signal..."
sleep 1
end
When a signal is received (via epoll), I see the following:
$> Waiting for signal...
$> Waiting for signal...
$> Hey now.
$> // process hangs
$> // Another signal occurs
$> [BUG] vm_call_cfunc - cfp consistency error
Sometimes, I can get more than one signal to process before the bug surfaces again.
I found the answer while investigating a similar issue.
As it turns out, I too was trying to use native thread signals (with pthread_create), which are not supported by MRI.
TL;DR: the Ruby VM is not (at the time of writing) thread-safe. Check out this nice write-up on Ruby threading for a better overall understanding of how to work within these confines.
You can use Ruby's native_thread_create(rb_thread_t *th), which uses pthread_create behind the scenes. There are some drawbacks, which you can read about in the documentation above the method definition. You can then run the callback with Ruby's rb_thread_call_with_gvl method. Also, I haven't done it here, but it might be a good idea to create a wrapper method so you can use rb_protect to handle exceptions the callback may raise (otherwise they will be swallowed by the VM).
VALUE execute_callback(VALUE callback)
{
    return rb_funcall(callback, rb_intern("call"), 0);
}

// execute the callback when the thread receives the signal
rb_thread_call_with_gvl(execute_callback, data->callback);

Underlying mechanism when pausing a process

I have a program that needs to pause and resume another program. To do this, I use the kill function, either from code:
kill(pid, SIGSTOP); // to pause
kill(pid, SIGCONT); // to resume
Or from the command line with the equivalent:
kill -STOP <pid>
kill -CONT <pid>
I can trace what's going on with the threads using this code, taken from Mac OS X Internals.
If a program is paused and immediately resumed, the state of its threads can show as UNINTERRUPTIBLE. Most of the time they report as WAITING, which is not surprising, and if a thread is doing work, it shows as RUNNING.
What I don't understand is that when I pause a program and view the states of its threads, they still show as WAITING. I would have expected their state to be either STOPPED or HALTED.
Can someone explain why they still show as WAITING, and when they would be STOPPED or HALTED? In addition, is there another structure somewhere that shows the state of the program and its threads being halted in this way?
"Waiting" is shown in your case because you did not terminate the program rather paused it, where as Stopped or Halted state usually occurs when the program immediately stopped working due to some runtime error. As far as your second question is concerned, I do not think there is some other structure out there to show the state of the program. Cheers
After researching and experimenting with the available structures, I've discovered that it is possible to show the state of a halted program by looking at the process suspend count. This is my solution:
int ProcessIsSuspended(unsigned int pid)
{
    kern_return_t kr;
    mach_port_t task;
    mach_port_t mytask;

    mytask = mach_task_self();
    kr = task_for_pid(mytask, pid, &task);
    // handle error here...

    char infoBuf[TASK_INFO_MAX];
    struct task_basic_info_64 *tbi;
    int infoSize = TASK_INFO_MAX;
    kr = task_info(task, TASK_BASIC_INFO_64, (task_info_t)infoBuf,
                   (mach_msg_type_number_t *)&infoSize);
    // handle error here...

    tbi = (struct task_basic_info_64 *) infoBuf;
    if (tbi->suspend_count > 0)   // process is suspended
        return 1;
    return 0;
}
If suspend_count is 0, the program is running; otherwise it is in a paused state, waiting to be resumed.

Multithreaded JNI causing segfaults under load

I'm embedding a JVM into my webserver, which has 4 worker threads that never die. The following code runs on each HTTP request inside any one of the 4 workers:
// normally I would do URL routing here first, but this is just a JNI test for now
jclass cls;
jmethodID method;
jobjectArray args;
jclass stringClass;
jstring jstr;

(*jvm)->AttachCurrentThread(jvm, &env, NULL);
cls = (*env)->FindClass(env, "HelloWorldClass");
method = (*env)->GetStaticMethodID(env, cls, "main", "([Ljava/lang/String;)V");
jstr = (*env)->NewStringUTF(env, "Hello world!");
stringClass = (*env)->FindClass(env, "java/lang/String");
args = (*env)->NewObjectArray(env, 1, stringClass, jstr);
(*env)->CallStaticVoidMethod(env, cls, method, args);
When I step through with the debugger, it works. But when I put it under load with the weighttp benchmark, it randomly segfaults on either the FindClass() line or the CallStaticVoidMethod() line. What could be the problem? I've read through a lot of docs, and I don't see how I would need to lock or free anything here.
This is pretty much the most basic JNI code possible, carried over from the official docs: http://java.sun.com/docs/books/jni/html/invoke.html
Looks like I had put the JNIEnv* into global scope. A JNIEnv is only valid on the thread it was obtained for, so even though each worker overwrites the global with its own pointer from AttachCurrentThread, under load the workers end up dereferencing each other's (invalid-for-them) environment pointers. Keeping the JNIEnv* in a local (per-thread) variable fixed the crashes. The wonders of API design!
