I've read some examples from the libcurl homepage. They always use a loop to monitor the multi handle when downloading through curl_multi_perform, like below:
curl_multi_add_handle(multi_handle, easy_handle);
do {
    curl_multi_wait(…);
    curl_multi_perform(multi_handle, &still_running);
} while (still_running);
That makes my program block in that section of code.
I want libcurl to invoke a callback after any one of the easy handles finishes downloading.
For example:
The server can receive requests and pass them to the multi_handle to download asynchronously.
The server can still receive requests while the multi_handle is downloading.
The two are independent (asynchronous, in other words).
Typically, curl_multi_perform is called in a loop to complete all the curl-related tasks, like HTTP transactions.
The way you have put the code, you will not achieve the asynchronous way of using libcurl. There are ways to achieve it.
In a typical implementation you will have your main loop, where you might be dealing with a number of tasks. For example:
do
{
    execute task1
    execute task2
    .............
    execute taskn
}
while (condition)
In that loop, you can call curl_multi_perform.
So the main loop looks like:
do
{
    execute task1
    execute task2
    .............
    execute taskn
    curl_multi_perform(curlm, &count);
}
while (condition)
That way you will do all your tasks while curl_multi_perform is called from time to time, and you will achieve an asynchronous way of using libcurl.
Please check the documentation; depending on some return value you may be able to avoid calling curl_multi_perform (I remember reading about that previously).
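To make this concrete, here is a minimal sketch of such a main loop. The helpers handle_other_work() and on_transfer_done() are hypothetical names for the application side (they are not libcurl API); the per-transfer completion notification itself comes from libcurl's curl_multi_info_read:

#include <curl/curl.h>

/* Hypothetical: one iteration of the application's own work (e.g. accept requests). */
extern void handle_other_work(void);
/* Hypothetical: application callback invoked when one transfer completes. */
extern void on_transfer_done(CURL *easy, CURLcode result);

void main_loop(CURLM *multi)
{
    int still_running = 1;
    while (still_running) {
        handle_other_work();                       /* server keeps serving requests */
        curl_multi_perform(multi, &still_running);

        /* Fire a per-handle callback for every finished transfer. */
        CURLMsg *msg;
        int msgs_left;
        while ((msg = curl_multi_info_read(multi, &msgs_left))) {
            if (msg->msg == CURLMSG_DONE) {
                on_transfer_done(msg->easy_handle, msg->data.result);
                curl_multi_remove_handle(multi, msg->easy_handle);
            }
        }

        /* Wait briefly for socket activity instead of busy-spinning. */
        curl_multi_wait(multi, NULL, 0, 10, NULL);
    }
}

The short timeout passed to curl_multi_wait keeps the loop from spinning while still returning quickly, so the surrounding tasks are never blocked for long.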
As far as I know, the start method is a blocking method; it blocks execution until the sequence is done.
Can someone explain why we need the wait_for_sequence_state task?
This is my code snippet:
virtual task main_phase(uvm_phase phase);
  phase.raise_objection(this, "Test Main Objection");
  virt_seq1 = wb_conmax_virtual_sequence::type_id::create("wb_conmax_virtual_sequence", this);
  virt_seq1.start(env.wb_conmax_virt_seqr, null);
  virt_seq1.wait_for_sequence_state(UVM_FINISHED);
  phase.drop_objection(this, "Dropping Test Main Objection");
endtask
The call to wait_for_sequence_state is redundant here; it will return immediately, because start is blocking and the sequence has already reached UVM_FINISHED by the time start returns. wait_for_sequence_state is useful when the sequence is started in a separate process, where start does not block the code that wants to wait for it.
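For example, a sketch (reusing the handles from the snippet above) of starting the sequence in a separate process and then waiting for it:

fork
  virt_seq1.start(env.wb_conmax_virt_seqr, null);
join_none
// other stimulus or checks can run here in parallel
virt_seq1.wait_for_sequence_state(UVM_FINISHED);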
I'm using Camel Exec for automated shutdowns on some of our devices.
The shutdown command is pretty simple, and it mostly works fine:
from(START_DEEP_SLEEP)
    .setBody(constant(null)) // we don't want stdin for exec
    .setHeader(ExecBinding.EXEC_COMMAND_ARGS, constant("""shutdown $shutdownDelay "starting deep sleep shutdown" """))
    .to("exec:sudo")
Obviously, this command will also shut down the application executing it. That too isn't much of an issue, except that sometimes this produces an exit value of 143. I know the meaning of the return value, and it makes sense to see it here, but this only happens on some devices; most others just return 0. They are all of the same type, so I really don't know where this discrepancy comes from, but it's not even that big an issue: the shutdown works nonetheless.
The problem is that camel exec logs this as an error:
ERROR 549 --- [Camel (camel-1) thread #1 - seda://start-deepsleep] o.a.camel.component.exec.ExecProducer : The command ExecCommand [args=[shutdown, now, starting deep sleep shutdown], executable=sudo, timeout=9223372036854775807, outFile=null, workingDir=null, useStderrOnEmptyStdout=false] returned exit value 143
This produces undesired noise in our monitoring, and I would rather not have it logged.
The core issue here is that Camel Exec does not throw, so there's no exception I could handle. It just logs the error, which then gets picked up by our log analysis.
I would like to handle that exit code gracefully without Camel Exec logging an error. The return value is already logged separately anyway. How can I do that?
According to the docs (http://camel.apache.org/exec.html) there is a header ExecBinding.EXEC_EXIT_VALUE filled with the exit value. Yours should be 143 (the docs state that this depends on the OS).
That could be a "hook" to handle the log entry, e.g. deleting the last entry with the same error number.
Of course this is only a cosmetic fix. The implementation could look like this:
from(START_DEEP_SLEEP)
    .setBody(constant(null)) // we don't want stdin for exec
    .setHeader(ExecBinding.EXEC_COMMAND_ARGS, constant("""shutdown $shutdownDelay "starting deep sleep shutdown" """))
    .to("exec:sudo")
    .choice()
        .when(header(ExecBinding.EXEC_EXIT_VALUE).isEqualTo(143))
            .to("direct:edit_the_log")
    .end()
Please note that I did not test that code. Maybe you access that header with
.when(header(EXEC_EXIT_VALUE))
instead.
Please let me know whether that is a proper solution or not.
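As an alternative that avoids touching the route at all: the noise comes from a single logger (the o.a.camel.component.exec.ExecProducer in your message), so you could raise its threshold in your logging configuration. A sketch for logback, assuming that is the framework in use:

<logger name="org.apache.camel.component.exec.ExecProducer" level="OFF"/>

The trade-off is that genuine exec failures from other routes would be silenced too.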
Goal: allow a C extension to receive a block/proc for delayed execution while retaining the current execution context.
I have a method in C (exposed to Ruby) that accepts a callback (via a VALUE hash argument) or a block.
// For brevity, let's assume m_CBYO is set up to make a CBYO module available to Ruby
extern VALUE m_CBYO;

VALUE CBYO_add_callback(VALUE self, VALUE callback)
{
    if (rb_block_given_p()) {
        callback = rb_block_proc();
    }
    if (NIL_P(callback)) {
        rb_raise(rb_eArgError, "either a block or callback proc is required");
    }
    // a method is called here to add the callback proc to rb_callbacks
    return callback;
}

rb_define_module_function(m_CBYO, "add_callback", CBYO_add_callback, 1);
I have a struct I'm using to store these with some extra data:
struct rb_callback
{
    VALUE rb_cb;
    unsigned long long lastcall;
    struct rb_callback *next;
};
static struct rb_callback *rb_callbacks = NULL;
When it comes time (triggered by an epoll event), I iterate over the callbacks and execute each one:
rb_funcall(cb->rb_cb, rb_intern("call"), 0);
When this happens, I can see that it successfully executes the Ruby code in the callback; however, it escapes the current execution context.
Example:
# From Ruby, including the above extension
CBYO.add_callback do
  puts "Hey now."
end

loop do
  puts "Waiting for signal..."
  sleep 1
end
When a signal is received (via epoll) I will see the following:
$> Waiting for signal...
$> Waiting for signal...
$> Hey now.
$> // process hangs
$> // Another signal occurs
$> [BUG] vm_call_cfunc - cfp consistency error
Sometimes, I can get more than one signal to process before the bug surfaces again.
I found the answer while investigating a similar issue.
As it turns out, I too was trying to use native thread signals (with pthread_create), which are not supported by MRI.
TL;DR: the Ruby VM is not currently (at the time of writing) thread-safe. Check out this nice write-up on Ruby threading for a better overall understanding of how to work within these confines.
You can use Ruby's native_thread_create(rb_thread_t *th), which uses pthread_create behind the scenes. There are some drawbacks, which you can read about in the documentation above the method definition. You can then run the callback with Ruby's rb_thread_call_with_gvl method. Also, I haven't done it here, but it might be a good idea to create a wrapper method so you can use rb_protect to handle exceptions the callback may raise (otherwise they will be swallowed by the VM).
// rb_thread_call_with_gvl expects a void *(*)(void *) function
static void *execute_callback(void *callback)
{
    return (void *)rb_funcall((VALUE)callback, rb_intern("call"), 0);
}

// execute the callback when the thread receives the signal
rb_thread_call_with_gvl(execute_callback, (void *)data->callback);
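For the rb_protect idea mentioned above, a minimal sketch; call_callback and execute_callback_protected are illustrative names I made up:

static VALUE call_callback(VALUE callback)
{
    return rb_funcall(callback, rb_intern("call"), 0);
}

static void *execute_callback_protected(void *callback)
{
    int state = 0;
    VALUE result = rb_protect(call_callback, (VALUE)callback, &state);
    if (state) {
        // the callback raised; clear (or log) the pending exception
        // instead of letting it silently unwind the VM
        rb_set_errinfo(Qnil);
    }
    return (void *)result;
}

// from the native thread:
// rb_thread_call_with_gvl(execute_callback_protected, (void *)data->callback);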
With both DKPy-SITL and our APM2 board, the wait_ready method is causing our program to raise an APIException because the command list (waypoints) takes too long to download. In the past (with droneapi) this wasn't an issue for me. Some waypoints are being downloaded, but the process takes about 10 seconds for each one, which leads me to believe something weird is going on.
Are there any ways to speed up the download process? I've posted the relevant code below.
self.vehicle = connect(connection_string, baud=baud_rate,
                       status_printer=dronekit_printer, wait_ready=True)
and later in another asynchronous method
def commands(self):
    commands = self.vehicle.commands
    commands.download()
    commands.wait_ready()
    return commands
The error occurs on commands.wait_ready(). There has to be a faster way to download commands than sitting there for over 30 seconds on an i7-4790K processor, especially since I've run the same code on a slower computer in the past with droneapi. If need be, I can raise an issue on the DroneKit GitHub as well.
I had the same issue. The first download call always goes well (0 commands). Once you have uploaded some commands, the second time you try to download them it fails (a 'Timeout' exception).
What I did to solve this was to call clear after that first download, and skip downloading afterwards.
Something like this:
cmds = vehicle.commands
if not cmds.count > 0:
    # Download the (empty) command list
    cmds.download()
    # Wait until the download is finished
    cmds.wait_ready()
    cmds.clear()
# Add / modify the commands here and then upload them
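For completeness, a sketch of the add/upload step that the last comment refers to, using dronekit's Command class; the takeoff command and its 10 m altitude are just example values:

from pymavlink import mavutil
from dronekit import Command

# queue a single takeoff command and push the new mission to the vehicle
cmds.add(Command(0, 0, 0,
                 mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
                 mavutil.mavlink.MAV_CMD_NAV_TAKEOFF,
                 0, 0, 0, 0, 0, 0, 0, 0, 10))
cmds.upload()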
I used to work in C, where threads are easy to create with a specific function I choose.
Now in Tcl I can't start a thread with a specific function I want. I tried this:
package require Thread

proc printme {aa} {
    puts "$aa"
}

set abc "dasdasdas"
set pool [tpool::create -maxworkers 4]

# The list of *scripts* to evaluate
set tasks {
    {puts "ThisisOK"}
    {puts $abc}
    {printme "1234"}
}

# Post the work items (scripts to run)
foreach task $tasks {
    lappend jobs [tpool::post $pool $task]
}

# Wait for all the jobs to finish
for {set running $jobs} {[llength $running]} {} {
    tpool::wait $pool $running running
}

# Get the results; you might want a different way to print the results...
foreach task $tasks job $jobs {
    set jobResult [tpool::get $pool $job]
    puts "TASK: $task"
    puts "RESULT: $jobResult"
}
I always get:
Execution error 206: invalid command name "printme"invalid command name "printme"
while executing
"printme "1234""
invoked from within
"tpool::get $pool $job"
Why?
Your problem is that the Tcl threading model is very different from the one used in C. Tcl's model is basically a 'shared nothing by default' model, mostly based on message passing.
So every thread in the pool is an isolated interpreter and does not know anything about a proc printme. You need to initialize those interpreters with the procs you need.
See the docs for the ::tpool::create command; it has a -initcmd option where you can define or package require the stuff you need.
So try this to initialize your threads:
set pool [tpool::create -maxworkers 4 -initcmd {
    proc printme {aa} {
        puts "$aa"
    }
}]
https://www.tcl.tk/man/tcl/ThreadCmd/tpool.htm#M10
To answer your comment in a bit more detail:
No, there is no way to make Tcl threads work like C threads and share objects and procs freely. It is a fundamental design decision that allows Tcl to have an interpreter without a massive global lock (in contrast to e.g. CPython), as most things are thread-local and use thread-local storage.
But there are some ways to make initialization and use of multiple thread interpreters easier. One is the -initcmd parameter from ::tpool::create which allows you to run initialization code for every single interpreter in your pool without doing it manually. If all your code lives in a package, you simply add a package require and your interpreter is properly initialized.
If you really want to share state between multiple threads, you can use the ::tsv subcommands. They allow you to share arrays and other things between threads in an explicit way, but under the hood this involves the typical locks and mutexes you might know from C to mediate access. A minimal sketch follows.
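Here 'counters' and 'total' are just example names for a shared array and its element:

package require Thread

# create/set a shared variable visible to every thread
tsv::set counters total 0

# any thread may update or read it; access is mediated internally
tsv::incr counters total
puts [tsv::get counters total]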
There is another set of commands in the Thread package that makes initialization easier: the ttrace command, which allows you to simply trace what gets executed in one interpreter and automatically repeat it in another interpreter. It is quite smart and only shares/copies the procs you really use to the target, instead of loading everything upfront.