GtkSpinner with a long-running function in C

I'm making a GTK+3 application in C and I want a spinner to show when the program is processing the data. Here's what I generally have:
main()
{
    // Some statements
    g_signal_connect(G_OBJECT(btnGenerate), "clicked", G_CALLBACK(Generate), &mainform);
}

void Generate(GtkWidget *btnGenerate, form_widgets *p_main_form)
{
    gtk_spinner_start(GTK_SPINNER(p_main_form->spnProcessing));
    Begin_Lengthy_Processing(Parameters, Galore, ...);
    //gtk_spinner_stop(GTK_SPINNER(p_main_form->spnProcessing));
}
I have the stop function commented out so I can see the spinner spin even after the function has finished, but the spinner only starts after the function has finished, so I suspect it gets started back in the main loop.
I also found out that the entire interface freezes during the execution of the long-running function.
Is there a way to get it to start and display inside the callback function? I found the same question, but it uses Python and threads. This is C, not Python, so I would assume things are different.

You need to run your lengthy computation in a separate thread, or break it up into chunks and run each of them separately as idle callbacks in the main thread.
If your lengthy computation takes a single set of inputs and doesn’t need any more inputs until it’s finished, then you should construct it as a GTask and use g_task_run_in_thread() to start the task. Its result will be delivered back to the main thread via the GTask’s GAsyncReadyCallback. There’s an example here.
If it takes more input as it progresses, you probably want to use a GAsyncQueue to feed it more inputs, and a GThreadPool to provide the threads (amortising the cost of creating threads over multiple calls to the lengthy function, and protecting against denial of service).
The GNOME developer docs give an overview of how to do threading.
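For illustration, here is a minimal sketch of the GTask route, reusing the names from the question; the worker function, the callback name, and passing p_main_form as the only argument to Begin_Lengthy_Processing are my assumptions, not the linked example:
static void generate_thread_func(GTask *task, gpointer source_object, gpointer task_data, GCancellable *cancellable)
{
    form_widgets *p_main_form = task_data;
    Begin_Lengthy_Processing(p_main_form);          /* runs in a worker thread */
    g_task_return_boolean(task, TRUE);
}

static void generate_done(GObject *source, GAsyncResult *res, gpointer user_data)
{
    form_widgets *p_main_form = user_data;
    /* back on the main thread, so touching widgets is safe again */
    gtk_spinner_stop(GTK_SPINNER(p_main_form->spnProcessing));
}

void Generate(GtkWidget *btnGenerate, form_widgets *p_main_form)
{
    gtk_spinner_start(GTK_SPINNER(p_main_form->spnProcessing));
    GTask *task = g_task_new(NULL, NULL, generate_done, p_main_form);
    g_task_set_task_data(task, p_main_form, NULL);
    g_task_run_in_thread(task, generate_thread_func);
    g_object_unref(task);   /* the task keeps itself alive until the callback has run */
}
The clicked handler returns immediately, the main loop keeps the spinner animated, and the stop call happens in the GAsyncReadyCallback once the work is done.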

This is what I got:
int main()
{
    // Statements...
    g_signal_connect(G_OBJECT(btnGenerate), "clicked", G_CALLBACK(Process), &mainform);
    // More statements...
}

void Process(GtkWidget *btnGenerate, form_widgets *p_main_form)
{
    GError *processing_error = NULL;   /* must start out NULL for GLib's error handling */
    GThread *start_processing;
    gtk_spinner_start(GTK_SPINNER(p_main_form->spnProcessing));
    active = true;
    if ((start_processing = g_thread_try_new(NULL, (GThreadFunc)Generate, p_main_form, &processing_error)) == NULL)
    {
        printf("%s\n", processing_error->message);
        printf("Error, cannot create thread!?!?\n\n");
        exit(processing_error->code);
    }
}

void Generate(form_widgets *p_main_form)
{
    // Long process
    active = false;
}
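A note on stopping the spinner (this part is my assumption, not shown above): GTK widget calls must happen on the main thread, so the worker can hand the stop back to the main loop with g_idle_add, for example:
static gboolean stop_spinner(gpointer user_data)
{
    form_widgets *p_main_form = user_data;
    gtk_spinner_stop(GTK_SPINNER(p_main_form->spnProcessing));
    return G_SOURCE_REMOVE;   /* run once, then remove the idle source */
}

void Generate(form_widgets *p_main_form)
{
    // Long process
    active = false;
    g_idle_add(stop_spinner, p_main_form);   /* marshals the widget call back to the main thread */
}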
My program, once cleaned up and finished (there are many other bugs in it), will be put on GitHub.
Thank you all for your help. This answer comes from looking at all of your answers and comments as well as reading some more documentation, but mostly your comments and answers.

I did something similar in my gtk3 program. It's not that difficult. Here's how I would go about it.
/**
g_idle_add_full() expects a pointer to a function with this signature:
gboolean (*GSourceFunc) (gpointer user_data);
So your function signature must adhere to that in order to be called.
But you might want to pass variables to the function.
If you don't want to keep the variables in global scope,
you can do this:

typedef struct myDataType {
    char *name;
    int age;
} myDataType;

myDataType person = {"Max", 25};

Then, when calling g_idle_add_full(), you pass a pointer to it:
g_idle_add_full(G_PRIORITY_HIGH_IDLE, myFunction, &person, NULL);
*/
int main()
{
    // Assuming there exists a pointer called data
    g_idle_add_full(G_PRIORITY_HIGH_IDLE, lengthyProcessCallBack, data, NULL);
    // The GTK & GDK event loop keeps running; note that the idle callback itself
    // still executes on the main thread, whenever the loop is idle
}
gboolean lengthyProcessCallBack(gpointer data)
{
    myDataType *person = (myDataType *) data;
    // Do the lengthy stuff here, ideally in small chunks
    // (return TRUE to be called again for the next chunk so the UI stays responsive)
    sleep(3600); // hypothetical long process :D
    return FALSE; // removed from the event sources and won't be called again.
}


How to use napi_threadsafe_function for NodeJS Native Addon

I've been looking through the N-API documentation to try to understand how it deals with multithreading. According to the documentation, napi_create_threadsafe_function() and napi_call_threadsafe_function() are used to create and call JS functions from multiple threads. The issue is that the documentation is not that straightforward, there are no examples in it, and I can't find any anywhere else.
If anyone has experience using napi_create_threadsafe_function() and napi_call_threadsafe_function(), or knows where to find examples of them being used, please help out with a basic example so I can understand how to use them correctly.
I'm writing a C addon, not C++, and need to use these functions. I am not using the node-addon-api wrapper, but N-API directly.
To summarize, N-API thread-safe functions act as a safe tunnel between asynchronous C/C++ code executing on a worker thread and the JavaScript layer for information exchange.
Before going technical, let us consider a scenario where we have a very long-running, processing-heavy task to complete. We all know that putting this task on the Node.js main thread is not a good choice: it would choke the event loop and block all the other tasks in the queue. So a good choice is to run this task on a separate thread (let us call it the worker thread). JavaScript asynchronous callbacks and Promises take exactly this approach.
Say we have deployed the task on a worker thread, we have a portion of the result ready, and we would like to send it to the JavaScript layer. The process involves converting the result into a napi_value and then calling the JavaScript callback function from C/C++. Unfortunately, neither operation can be performed from a worker thread; both must be done exclusively from the main thread. A JavaScript Promise or callback waits until the task completes, carries the result back to the main thread in plain C/C++ storage (a struct, for example), and only then does the napi_value conversion and calls the JavaScript callback from the main thread.
Since our task is extremely long-running, we probably don't want to wait until the end of the task before exchanging results with the JavaScript layer.
Consider a scenario where we are searching for objects in a very large video and we would prefer each detected object to be sent to the JavaScript layer as soon as it is found.
In such a scenario we have to start sending task results while the task is still in progress. This is where asynchronous thread-safe function calls come to our help. They act as a safe tunnel between the worker thread and the JavaScript layer for information exchange. Consider the following function skeleton:
napi_value CAsyncStreamSearch(napi_env env, napi_callback_info info)
{
    // The native addon function exposed to JavaScript.
    // This is the function a Node.js application will call.
}

void ExecuteWork(napi_env env, void* data)
{
    // We will use this function to get the task done.
    // This code will be executed on a worker thread.
}

void OnWorkComplete(napi_env env, napi_status status, void* data)
{
    // After the ExecuteWork function exits, this
    // callback function will be called on the main thread.
}

void ThreadSafeCFunction4CallingJS(napi_env env, napi_value js_cb,
                                   void* context, void* data)
{
    // This function acts as the safe tunnel between the asynchronous C/C++ code
    // executing on a worker thread and the JavaScript layer for information exchange.
}
The first three functions are nearly the same as the JavaScript Promise and callback pattern we are familiar with. The fourth one is specific to asynchronous thread-safe function calls. Here, our long-running task is executed by ExecuteWork() on a worker thread. We are not allowed to call into JavaScript (or do any napi_value conversion of the result) from ExecuteWork(), but we are permitted to do so from ThreadSafeCFunction4CallingJS(), as long as we invoke it through its N-API handle (the N-API equivalent of a C/C++ function pointer). So we pack the JavaScript calls inside ThreadSafeCFunction4CallingJS(), and ExecuteWork() passes it the result, at the moment it is invoked, in plain C/C++ storage such as a struct. ThreadSafeCFunction4CallingJS() then converts this result to a napi_value and calls the JavaScript function.
Under the cover, the ThreadSafeCFunction4CallingJS() invocation is queued to the event loop, and eventually it gets executed on the main thread.
The following code snippet, packed inside CAsyncStreamSearch(), is responsible for creating the N-API equivalent of a C/C++ function pointer by using napi_create_threadsafe_function(), and it is done from the native addon's main thread itself. Similarly, it requests the creation of a work item with napi_create_async_work() and then places the work in the event queue with napi_queue_async_work(), so that a worker thread will pick it up in the future.
napi_value CAsyncStreamSearch(napi_env env, napi_callback_info info)
{
    -- -- -- --
    -- -- -- --
    // Create a thread-safe N-API callback function corresponding to the C/C++ callback function
    napi_create_threadsafe_function(env,
        js_cb, NULL, work_name, 0, 1, NULL, NULL, NULL,
        ThreadSafeCFunction4CallingJS,  // the C/C++ callback function
        // out: the asynchronous thread-safe JavaScript function
        &(async_stream_data_ex->tsfn_StreamSearch));

    // Create an async work item that can be deployed in the node.js event queue
    napi_create_async_work(env, NULL,
        work_name,
        ExecuteWork,
        OnWorkComplete,
        async_stream_data_ex,
        // OUT: the handle to the async work item
        &(async_stream_data_ex->work_StreamSearch));

    // Queue the work item for execution.
    napi_queue_async_work(env, async_stream_data_ex->work_StreamSearch);
    return NULL;
}
Then, during the asynchronous execution of the task, the ExecuteWork() function invokes ThreadSafeCFunction4CallingJS() by calling napi_call_threadsafe_function(), as shown below.
static void ExecuteWork(napi_env env, void *data)
{
    // tsfn is the N-API handle to the ThreadSafeCFunction4CallingJS
    // function that we created in the CAsyncStreamSearch function
    napi_acquire_threadsafe_function(tsfn);
    while ( /* work remains */ )
    {
        // this will eventually invoke ThreadSafeCFunction4CallingJS();
        // we may call it any number of times (in fact from any thread)
        napi_call_threadsafe_function(tsfn, WorkResult, napi_tsfn_blocking);
    }
    napi_release_threadsafe_function(tsfn, napi_tsfn_release);
}
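For completeness, a hedged sketch of what the body of ThreadSafeCFunction4CallingJS() might look like; WorkResult and its found_at field are hypothetical names, only the N-API calls themselves are real:
static void ThreadSafeCFunction4CallingJS(napi_env env, napi_value js_cb,
                                          void *context, void *data)
{
    /* Runs on the main thread; `data` is the plain C result that the worker
       thread handed to napi_call_threadsafe_function(). */
    WorkResult *result = (WorkResult *) data;
    if (env != NULL)   /* env is NULL while the thread-safe function is being torn down */
    {
        napi_value undefined, js_result;
        napi_get_undefined(env, &undefined);
        napi_create_int32(env, result->found_at, &js_result);
        napi_call_function(env, undefined, js_cb, 1, &js_result, NULL);
    }
    free(result);   /* assuming the worker malloc'd it, release it here */
}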
The example you pointed out is one of the best sources of information, and it comes directly from the Node.js team itself. When I was learning this concept I referred to the same example; during my study I recreated it by extracting the original idea from it, and I hope you find this version much simplified. It is available at
https://github.com/msatyan/MyNodeC/blob/master/src/mync1/ThreadSafeAsyncStream.cpp
https://github.com/msatyan/MyNodeC/blob/master/test/ThreadSafeAsyncStream.js
If anyone else gets stuck on this issue: I finally managed to hunt down an example here.
Once I understand it better and have gotten a working sample, I will update here. Hopefully someone needing this in the future will have an easier time than me.
See Satyan's answer
The solution from this site worked for me here
struct ThreadCtx {
    ThreadCtx(Napi::Env env) {};
    std::thread nativeThread;
    Napi::ThreadSafeFunction tsfn;
};

void Target::Connect(const Napi::CallbackInfo& info) {
    Napi::Env env = info.Env();
    threadCtx = new ThreadCtx(env);

    // Create a ThreadSafeFunction
    threadCtx->tsfn = Napi::ThreadSafeFunction::New(env, info[0].As<Napi::Function>(),
        "Resource Name", 0 /* Unlimited queue */, 1 /* Only 1 thread */, threadCtx,
        [&]( Napi::Env, void *finalizeData, ThreadCtx *context ) {
            printf("Thread cleanup\n");
            threadCtx->nativeThread.join();
        },
        (void*)nullptr
    );

    // Create a native thread
    threadCtx->nativeThread = std::thread([&] {
        auto callback = [](Napi::Env env, Napi::Function cb, char* buffer) {
            cb.Call({Napi::String::New(env, buffer)});
        };

        char reply[1024];
        memset(reply, 0, sizeof(reply));
        while(true)
        {
            size_t reply_length = boost::asio::read(s, boost::asio::buffer(reply, sizeof(reply)));
            if(reply_length <= 0) {
                printf("Bad read from boost asio\n");
                break;
            }
            // Callback (blocking) to JS
            napi_status status = threadCtx->tsfn.BlockingCall(reply, callback);
            if (status != napi_ok)
            {
                // Handle error
                break;
            }
        }
        // Release the thread-safe function
        threadCtx->tsfn.Release();
    });
}
addon.cc - (tested and 100% working)
#include <napi.h>
#include <thread>
#include <chrono>
#include <string>
#include <iostream>

Napi::Value SAFE_THREAD(const Napi::CallbackInfo& info) {
    std::thread([](Napi::ThreadSafeFunction tsfn){
        struct output_data{
            int arg1;
            std::string arg2;
        };
        auto data = new output_data();
        ///---------------
        /// fill output data
        data->arg1 = 1;
        data->arg2 = "string data";
        std::this_thread::sleep_for(std::chrono::milliseconds(2000));
        ///---------------
        /// output thread result to nodejs
        napi_status status = tsfn.BlockingCall(data, [](Napi::Env env, Napi::Function jsCallback, output_data* data){
            jsCallback.Call({Napi::Number::New(env, data->arg1), Napi::String::New(env, data->arg2)});
            delete data;
        });
        if(status != napi_ok) { std::cout << "error!" << "\n"; }
        tsfn.Release();
    }, Napi::ThreadSafeFunction::New(info.Env(), info[0].As<Napi::Function>(), "TSFN", 0, 1, [](Napi::Env env, void *finalizeData){}, (void *)nullptr)).detach();
    return info.Env().Null();
}
index.js
const ADDON = require('./THREAD/build/Release/addon');
function time_sec(){ return (new Date()).getTime()/1000; }
var t = time_sec();
ADDON.SAFE_THREAD((arg1, arg2) => {
    console.log(time_sec()-t, 'arg1 = '+arg1)
    console.log(time_sec()-t, 'arg2 = '+arg2)
});
console.log(time_sec()-t, 'fin')
output:
0.00099992752075 fin
2.00499987602233 arg1 = 1
2.00600004196167 arg2 = string data
see also how to emit data from thread

How to implement separate static variables for a given function called with different parameters

I am currently working on a microcontroller project in C that requires several timed functions to take place. I am using a hardware timer to produce an interrupt every millisecond, and variables as counters to produce the appropriate time intervals. The hardware details are not important.
As an example, for a particular function, the following code would be executed on every tick of the 1ms counter, resulting in Function() being called every 10ms.
if (count < 10)
{
    count++;
}
else
{
    Function();
    count = 0;
}
I would like to implement a wrapper function to avoid rewriting the counter code for every different interval, i.e:
TimerWrapper(required interval 1, Function 1 pointer)
TimerWrapper(required interval 2, Function 2 pointer)
and call it on every tick of the timer. For this to work, each different function called by the wrapper needs to have a separate count variable that needs to persist between calls of TimerWrapper(). I would like to keep all of the implementation details separate from my main program and introduce as few variables into my main program as possible.
Is it possible to do what I am asking, or is there a simpler/better way to achieve the same effect? I know how I would do this with an object oriented language but my skills are lacking in plain C.
I would think in terms of a structure along the lines of:
struct interrupt_responder
{
    int count;
    int rep_count;
    void (*handler)(void);
};
You then create as many such structures as you have different counters, appropriately initialized:
static struct interrupt_responder c1 = { 0, 10, Function };
You arrange to call a function with the responder:
void time_wrapper(struct interrupt_responder *r)
{
    if (++r->count >= r->rep_count)
    {
        r->count = 0;
        (*r->handler)();
    }
}
The function called in response to an interrupt then simply needs to know how to dispatch calls to time_wrapper with the appropriate argument each time.
void millisecond_interrupt_handler(void)
{
    time_wrapper(&c1);
    …
}
Or you can have an array of the interrupt responders, and the millisecond interrupt handler can loop over the array, calling the time wrapper function.
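A minimal sketch of that array variant; OtherFunction and its 250 ms interval are made up purely for illustration:
static struct interrupt_responder responders[] = {
    { 0, 10, Function },        /* call Function every 10 ms */
    { 0, 250, OtherFunction },  /* hypothetical second handler every 250 ms */
};

void millisecond_interrupt_handler(void)
{
    for (unsigned i = 0; i < sizeof responders / sizeof responders[0]; i++)
        time_wrapper(&responders[i]);
}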
You would have a set of file-scope variables for the interrupt responders, but they'd not need to be visible outside the file. If you need different argument lists for the handler functions, life is trickier; avoid that if you possibly can. However, it seems from a comment that it won't be a problem: the embedded function pointers will always be void (*handler)(void).

Run a specific set of lines within a function in a different thread in C

I have a huge function (more than 4000 lines). In this function, I have more than 100 variables declared at the beginning. Now, I want to run a specific block of lines in a different thread. For example, I want to run lines 2000-3000 in a different thread. How do I do this?
To scale down the example, this is what I have:
int functionA()
{
    .....variables declared......
    .....variables declared......
    printf("hello");
    printf("this");
    printf("is in another");
    printf("thread");
}
I want to run the 4 printf functions in another thread.
To do this, this is what I've currently done:
int functionA()
{
    .....variables declared......
    .....variables declared......
    void functionB()
    {
        printf("hello");
        printf("this");
        printf("is in another");
        printf("thread");
    }
    pthread_create(&tid, NULL, functionB, NULL);
    pthread_join(tid, NULL);
}
I know this is a terrible way to do this. However, there are too many variables to pass in case I want to make functionB a new independent function.
Please let me know how to proceed.
What I would do in your case is: create a struct containing all the needed variables, then create a new function that takes a pointer to that struct as its parameter. You can then create a new thread with that function and only have to pass that one struct. Also, creating the struct is quick to code; you just have to put
struct nameforstruct {
    // declare vars here, e.g.:
    int somevar;
};
around the declarations and change your accesses to the vars by putting myvars-> in front of them.
The function may then look like:
void *threadingStuff(void *arg) {
    struct nameforstruct *myvars = arg;   /* pthread start routines take a void * argument */
    if (myvars->somevar == 1) {
        // do stuff
    }
    return NULL;
}
That would, in my opinion, be the fastest way to achieve what you want (and the way with the least work), as sketched below. But I would really consider refactoring this into some better approach...
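For completeness, a minimal sketch of the calling side under the same assumptions (the struct, its somevar field, and threadingStuff are the hypothetical names used above):
#include <pthread.h>

int functionA(void)
{
    /* ...original local variables... */
    struct nameforstruct vars;
    vars.somevar = 1;                    /* copy in whatever the thread needs */

    pthread_t tid;
    /* threadingStuff has the void *(*)(void *) signature pthreads expect */
    if (pthread_create(&tid, NULL, threadingStuff, &vars) != 0)
        return -1;                       /* thread could not be created */

    /* functionA can keep doing other work here */

    pthread_join(tid, NULL);             /* vars must stay alive until the thread finishes */
    return 0;
}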

Calling IO operations from a thread in a Ruby C extension causes Ruby to hang

I have a problem with using threads in a C extension to run Ruby code asynchronously.
I have the following C code:
struct DATA {
    VALUE callback;
    pthread_t watchThread;
    void *ptr;
};

void *executer(void *ptr) {
    struct DATA *data = (struct DATA *) ptr;
    char oldVal[20] = "1";
    char newVal[20] = "1";
    pthread_cleanup_push(&threadGarbageCollector, data);
    while(1) {
        if(triggerReceived) {
            rb_funcall(data->callback, rb_intern("call"), 0);
        }
    }
    pthread_cleanup_pop(1);
    return NULL;
}
VALUE spawn_thread(VALUE self) {
    VALUE block;
    struct DATA *data;
    Data_Get_Struct(self, struct DATA, data);
    block = rb_block_proc();
    data->callback = block;
    pthread_create(&data->watchThread, NULL, &executer, data);
    return self;
}
I am using this because I want to provide Ruby code as a callback, which will be executed once the thread receives a signal.
In general this works fine if the callback is something like this Ruby code:
1 + 1
But if the callback's Ruby code looks like this:
puts "test"
then the main Ruby process stops responding once the callback is executed.
The thread is still running and able to react to signals, and it puts "test" every time it receives a message.
Can somebody maybe tell me, how to fix this?
Thanks a lot
From the Ruby C API docs:
As of Ruby 1.9, Ruby supports native 1:1 threading with one kernel thread per Ruby Thread object. Currently, there is a GVL (Global VM Lock) which prevents simultaneous execution of Ruby code which may be released by the rb_thread_call_without_gvl and rb_thread_call_without_gvl2 functions. These functions are tricky-to-use and documented in thread.c; do not use them before reading comments in thread.c.
TL;DR: the Ruby VM is not currently (at the time of writing) thread safe. Check out this nice write-up on Ruby threading for a better overall understanding of how to work within these confines.
You can use Ruby's native_thread_create(rb_thread_t *th) which will use pthread_create behind the scenes. There are some drawbacks that you can read about in the documentation above the method definition. You can then run your callback with Ruby's rb_thread_call_with_gvl method. Also, I haven't done it here, but it might be a good idea to create a wrapper method so you can use rb_protect to handle exceptions your callback may raise (otherwise they will be swallowed by the VM).
void *execute_callback(void *callback)
{
    /* matches the void *(*)(void *) signature rb_thread_call_with_gvl expects */
    return (void *) rb_funcall((VALUE) callback, rb_intern("call"), 0);
}

// execute your callback when the thread receives a signal
rb_thread_call_with_gvl(execute_callback, (void *) data->callback);
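If you do want the rb_protect wrapper mentioned above, a hedged sketch of one way to do it (the function names are illustrative; only the Ruby C API calls themselves are real):
static VALUE call_callback(VALUE callback)
{
    return rb_funcall(callback, rb_intern("call"), 0);
}

static void *execute_callback_protected(void *callback)
{
    int state = 0;
    rb_protect(call_callback, (VALUE) callback, &state);
    if (state) {
        /* an exception was raised inside the callback; clear or log it */
        rb_set_errinfo(Qnil);
    }
    return NULL;
}

/* from the watcher thread, once it holds the GVL: */
rb_thread_call_with_gvl(execute_callback_protected, (void *) data->callback);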

Extending scope of local variables in C over function calls

I have a library which provides function calls to a user as below:
int* g_ID = NULL;

int processing(int p1, char p2)
{
    int ID = newID();
    g_ID = &ID;
    callback(p1, p2);
    return ID;
}

void SendResponse()
{
    sendID(*g_ID);
}
The user sets up their application by registering a callback function with the signature void (*f)(int p1, char p2) and should not have knowledge of the ID used internally by the library. So the user-space code looks something like:
main()
{
    RegisterCallback(HandleRequest);
    while (inProgress())
        sleep(1); /* just sleep here */
}

void HandleRequest(int val1, char val2)
{
    /* ... do something user specific ... */
    SendResponse();
    return;
}
The problem here is that the library (the handling of IDs and g_ID) is not thread safe! The user's callback is invoked asynchronously by other library functions, on threads, and several such threads can run in parallel. But I don't want to give the user visibility of the library-internal IDs.
I know the code snippets above are not perfect. They're just to demonstrate my intention ... SendResponse() is not yet implemented ;-).
I hope someone can give some ideas on how to implement SendResponse() and keep thread safety.
You could use a thread-local here to keep the g_ID, rather than using a global. This will work in the scenario, as I understand it, where there may be multiple concurrent calls to processing() from different threads, but where processing() is as shown, so that the SendResponse() call only occurs within the scope (runtime scope, not lexical) of the callback() invocation. That is true in the code shown. It could be untrue if HandleRequest did something exotic like kick off another thread and then return (but you could certainly ban that by documentation).
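A minimal sketch of that thread-local variant, assuming C11's _Thread_local is available (newID, callback and sendID are the question's own names):
_Thread_local int *g_ID = NULL;   /* each thread gets its own copy of the pointer */

int processing(int p1, char p2)
{
    int ID = newID();
    g_ID = &ID;          /* only visible to this thread */
    callback(p1, p2);    /* any SendResponse() call made from here sees this thread's g_ID */
    return ID;
}

void SendResponse(void)
{
    sendID(*g_ID);
}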
The other, more classic, approach is to encapsulate all the state you care about, like g_ID, into a void *, or an opaque_state * or whatever, that you pass to the callback, and then functions like SendResponse() take that as an argument. If you don't like void * you can implement the opaque_state * version without exposing any details of that structure, using a forward declaration.
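And a sketch of that opaque-pointer variant; opaque_state is an illustrative name, and note that the user's callback signature gains one parameter that it simply hands back:
/* public header: only a forward declaration, so the user sees no internals */
typedef struct opaque_state opaque_state;
void SendResponse(opaque_state *state);

/* library implementation */
struct opaque_state {
    int ID;
};

void processing(int p1, char p2)
{
    opaque_state state = { newID() };
    callback(p1, p2, &state);      /* the user's callback passes the pointer back unchanged */
}

void SendResponse(opaque_state *state)
{
    sendID(state->ID);
}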
