Why do we need the wait_for_sequence_state task? - uvm

As far as I know, the start method is a blocking call: it blocks execution until the sequence is done.
Can someone explain why we need the wait_for_sequence_state task?
This is my code snippet:
virtual task main_phase(uvm_phase phase);
  phase.raise_objection(this, "Test Main Objection");
  virt_seq1 = wb_conmax_virtual_sequence::type_id::create("wb_conmax_virtual_sequence", this);
  virt_seq1.start(env.wb_conmax_virt_seqr, null);
  virt_seq1.wait_for_sequence_state(UVM_FINISHED);
  phase.drop_objection(this, "Dropping Test Main Objection");
endtask

The call to wait_for_sequence_state is redundant here; it will return immediately. Because start is blocking, the sequence is already in the UVM_FINISHED state by the time the call is made.
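Where wait_for_sequence_state does earn its keep is when the sequence is started without blocking. A hedged sketch, reusing the names from the snippet above and starting the sequence in a fork/join_none:

virtual task main_phase(uvm_phase phase);
  phase.raise_objection(this, "Test Main Objection");
  virt_seq1 = wb_conmax_virtual_sequence::type_id::create("wb_conmax_virtual_sequence", this);

  // Start the sequence in the background instead of blocking here.
  fork
    virt_seq1.start(env.wb_conmax_virt_seqr, null);
  join_none

  // ... do other work in parallel with the sequence ...

  // Now block until the sequence has actually reached UVM_FINISHED.
  virt_seq1.wait_for_sequence_state(UVM_FINISHED);
  phase.drop_objection(this, "Dropping Test Main Objection");
endtask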

How can a Ruby C extension store a proc for later execution?

Goal: allow a C extension to receive a block/proc for delayed execution while retaining the current execution context.
I have a method in C (exposed to Ruby) that accepts a callback (via a VALUE hash argument) or a block.
// For brevity, let's assume m_CBYO is set up to make a CBYO module available to Ruby
extern VALUE m_CBYO;

// A module function registered with argc = 1 receives self plus one argument.
VALUE CBYO_add_callback(VALUE self, VALUE callback)
{
    if (rb_block_given_p()) {
        callback = rb_block_proc();
    }
    if (NIL_P(callback)) {
        rb_raise(rb_eArgError, "either a block or callback proc is required");
    }
    // method is called here to add the callback proc to rb_callbacks
    return callback;
}

rb_define_module_function(m_CBYO, "add_callback", CBYO_add_callback, 1);
I have a struct I'm using to store these with some extra data:
struct rb_callback
{
    VALUE rb_cb;                 // the Ruby proc to invoke later
    unsigned long long lastcall;
    struct rb_callback *next;
};

static struct rb_callback *rb_callbacks = NULL;
When the time comes (triggered by epoll), I iterate over the callbacks and execute each one:
rb_funcall(cb->rb_cb, rb_intern("call"), 0);
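The loop itself isn't shown in the question; presumably it walks the linked list, along these lines:

for (struct rb_callback *cb = rb_callbacks; cb != NULL; cb = cb->next) {
    rb_funcall(cb->rb_cb, rb_intern("call"), 0);
}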
When this happens I see that it successfully executes the Ruby code in the callback; however, it escapes the current execution context.
Example:
# From Ruby, including the above extension
CBYO.add_callback do
  puts "Hey now."
end

loop do
  puts "Waiting for signal..."
  sleep 1
end
When a signal is received (via epoll) I will see the following:
$> Waiting for signal...
$> Waiting for signal...
$> Hey now.
$> // process hangs
$> // Another signal occurs
$> [BUG] vm_call_cfunc - cfp consistency error
Sometimes, I can get more than one signal to process before the bug surfaces again.
I found the answer while investigating a similar issue.
As it turns out, I too was trying to use native thread signals (with pthread_create), which are not supported by MRI.
TL;DR: the Ruby VM is not currently (at the time of writing) thread-safe. Check out this nice write-up on Ruby threading for a better overall understanding of how to work within these confines.
You can use Ruby's native_thread_create(rb_thread_t *th), which uses pthread_create behind the scenes. There are some drawbacks, which you can read about in the documentation above the method definition. You can then run the callback with Ruby's rb_thread_call_with_gvl method. Also, I haven't done it here, but it might be a good idea to create a wrapper method so you can use rb_protect to handle exceptions the callback may raise (otherwise they will be swallowed by the VM).
// Must match the void *(*)(void *) signature expected by rb_thread_call_with_gvl
static void *execute_callback(void *callback)
{
    return (void *)rb_funcall((VALUE)callback, rb_intern("call"), 0);
}

// execute the callback when the thread receives the signal
rb_thread_call_with_gvl(execute_callback, (void *)data->callback);
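For the rb_protect wrapper mentioned above, a minimal sketch (the helper names call_proc and execute_callback_protected are mine, not from the original):

static VALUE call_proc(VALUE callback)
{
    return rb_funcall(callback, rb_intern("call"), 0);
}

static void *execute_callback_protected(void *callback)
{
    int state = 0;
    VALUE result = rb_protect(call_proc, (VALUE)callback, &state);
    if (state) {
        // The callback raised; fetch the exception and clear it so the VM stays consistent.
        VALUE err = rb_errinfo();
        rb_set_errinfo(Qnil);
        // ... log or handle err as appropriate ...
    }
    return (void *)result;
}

// As before, but exceptions raised by the proc no longer vanish silently:
rb_thread_call_with_gvl(execute_callback_protected, (void *)data->callback);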

Any way to avoid looping to monitor the multi handle in libcurl?

I've read some examples from the libcurl homepage. They always monitor the multi handle in a loop when downloading through curl_multi_perform, like below:
curl_multi_add_handle(multi_handle, easy_handle);

do {
    curl_multi_wait(…);
    curl_multi_perform(multi_handle, &still_running);
} while (still_running);
That blocks my program in that section.
I want libcurl to invoke a callback after any one of the easy handles finishes downloading.
For example:
The server can receive requests and pass them to the multi_handle to download asynchronously.
The server can still receive requests while the multi_handle is downloading.
The two are independent (asynchronous, in other words).
Typically curl_multi_perform is called in a loop to complete all the curl-related tasks, like an HTTP transaction.
The way you have written the code, you will not get asynchronous behavior out of libcurl. There are ways to achieve it.
In a typical implementation you will have your main loop, where you might be dealing with a number of tasks. For example:
do
{
    execute task1
    execute task2
    .............
    execute taskn
}
while (condition)
In that loop, you can call curl_multi_perform, so the main loop looks like:
do
{
    execute task1
    execute task2
    .............
    execute taskn
    curl_multi_perform(curlm, &count);
}
while (condition)
That way you will get through all your tasks, curl_multi_perform is called from time to time, and you achieve an asynchronous way of using libcurl.
Please check the documentation; depending on certain return values you may be able to avoid calling curl_multi_perform (I remember reading about this previously).
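To get a per-transfer completion callback, one hedged sketch: after each curl_multi_perform call, drain curl_multi_info_read, which returns a CURLMSG_DONE message for every easy handle that has finished (on_transfer_done below is a hypothetical callback of yours, not a libcurl function):

CURLMsg *msg;
int msgs_left;

curl_multi_perform(multi_handle, &still_running);

// See which transfers, if any, completed during this pass.
while ((msg = curl_multi_info_read(multi_handle, &msgs_left))) {
    if (msg->msg == CURLMSG_DONE) {
        CURL *easy = msg->easy_handle;
        on_transfer_done(easy, msg->data.result);  // hypothetical user callback
        curl_multi_remove_handle(multi_handle, easy);
        curl_easy_cleanup(easy);
    }
}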

Tcl multithreading like in C: having a hard time executing a thread with a procedure

I used to work in C, where threads are easy to create with a specific function of my choosing.
Now in Tcl I can't start a thread with the specific function I want. I tried this:
package require Thread

proc printme {aa} {
    puts "$aa"
}

set abc "dasdasdas"

set pool [tpool::create -maxworkers 4]

# The list of *scripts* to evaluate
set tasks {
    {puts "ThisisOK"}
    {puts $abc}
    {printme "1234"}
}

# Post the work items (scripts to run)
foreach task $tasks {
    lappend jobs [tpool::post $pool $task]
}
# Wait for all the jobs to finish
for {set running $jobs} {[llength $running]} {} {
    tpool::wait $pool $running running
}

# Get the results; you might want a different way to print the results...
foreach task $tasks job $jobs {
    set jobResult [tpool::get $pool $job]
    puts "TASK: $task"
    puts "RESULT: $jobResult"
}
I always get:
Execution error 206: invalid command name "printme"invalid command name "printme"
while executing
"printme "1234""
invoked from within
"tpool::get $pool $job"
Why?
Your problem is that the Tcl threading model is very different from the one used in C. Tcl's model is basically a 'shared nothing by default' model, mostly based on message passing.
So every thread in the pool is an isolated interpreter and does not know anything about a proc printme. (The {puts $abc} task has the same problem: abc is not defined in the workers.) You need to initialize those interpreters with the procs and variables you need.
See the docs for the ::tpool::create command; it has a -initcmd option where you can define or package require the stuff you need.
So try this to initialize your threads:
set pool [tpool::create -maxworkers 4 -initcmd {
    proc printme {aa} {
        puts "$aa"
    }
}]
https://www.tcl.tk/man/tcl/ThreadCmd/tpool.htm#M10
To answer your comment a bit more detailed:
No, there is no way to make Tcl threads work like C threads and share objects and procs freely. It is a fundamental design decision and allows Tcl to have an interpreter without a massive global lock (in contrast to e.g. CPython), as most things are thread local and use thread local storage.
But there are some ways to make initialization and use of multiple thread interpreters easier. One is the -initcmd parameter from ::tpool::create which allows you to run initialization code for every single interpreter in your pool without doing it manually. If all your code lives in a package, you simply add a package require and your interpreter is properly initialized.
If you really want to share state between multiple threads, you can use the ::tsv subcommands. They allow you to share arrays and other things between threads in an explicit way. But under the hood they involve the typical locks and mutexes you might know from C to mediate access.
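A tiny sketch of the ::tsv style of sharing (the variable and element names are illustrative):

# In the main thread: create a shared counter
tsv::set stats jobs_done 0

# Inside any worker script: bump it atomically
tsv::incr stats jobs_done

# Back in the main thread: read the shared value
puts "completed: [tsv::get stats jobs_done]"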
There is another set of commands in the Thread package that makes initialization easier: ttrace, which traces what gets executed in one interpreter and automatically repeats it in another. It is quite smart and only shares/copies the procs you actually use to the target, instead of loading everything upfront.

Python: How to pause a while True loop?

So I'm making a Python version of Cookie Clicker :D
To add the cps to the cookie counter, I use this code:
while True:
    cps = b1*1 + b2*5 + b3*10 + b4*20 + b5*25
    c = c + cps
    time.sleep(60)
    print('you now have %s cookies' % c)
note: b1, b2, etc. are the amounts of the different cookie producers
Problem is, time.sleep pauses the whole script, not just the while loop you can see above.
BTW this is my first post, sorry if I did something wrong :/
Thanks for reading this :P
You can put this loop in a function and start a new thread for it; after that, calling sleep() will only suspend that thread for the given time, not the whole program.
Check the link; it will help with multithreading.
import threading

def worker():
    """Run your cookie loop here; time.sleep() now pauses only this thread."""

t = threading.Thread(target=worker)
t.start()
As far as I can see you have two options:
You measure the time elapsed between updates to the cookie amount c and multiply accordingly (see the sketch below).
You make a new thread; then sleep will just pause the thread that updates your cookie amount.
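A minimal sketch of the first option, assuming b1..b5 and c are defined as in the question; do_other_work() is a hypothetical stand-in for whatever else the script does each pass:

import time

last_update = time.monotonic()

while True:
    do_other_work()  # hypothetical: the rest of the game keeps running

    elapsed = time.monotonic() - last_update
    if elapsed >= 60:
        minutes = int(elapsed // 60)
        cps = b1*1 + b2*5 + b3*10 + b4*20 + b5*25
        c = c + cps * minutes          # credit each whole minute that passed
        last_update += 60 * minutes    # advance by the minutes credited, avoiding drift
        print('you now have %s cookies' % c)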

Where can I find documentation on uv_poll_init?

I'm looking at a libuv example at https://github.com/benfleis/samples/blob/master/libuv/stdio/stdio_poll.c and trying to understand it.
I mostly understand it, but I'm having some trouble with the uv_poll_init calls at the bottom, and I can't find any documentation.
Can someone point me to some documentation on it?
Thanks!
The official documentation is in the form of comments in the include/uv.h header file, which provides the following documentation for uv_poll_init():
Initialize the poll watcher using a file descriptor.
However, some better documentation that covers the concept of watchers can be found here. In short:
uv_poll_init(loop, &stdin_watcher, STDIN_FILENO);
Initializes stdin_watcher to observe STDIN_FILENO. When the watcher is started, all of its callbacks will be invoked within the context of loop.
Here is a basic pseudo-flow of the program:

stdout_cb:
    write whatever is in log_buf to stdout
    stop listening for when I can write to stdout

log:
    write message to log_buf
    have stdout_watcher listen for when its file becomes writeable
    (when it becomes writable, stdout_cb will be called)

stdin_cb:
    read from stdin
    call log

set_non_blocking:
    set file descriptor as non-blocking

main:
    set stdin/stdout to non-blocking
    get a handle to the default event loop
    initialize stdin_watcher; it will listen to stdin and its callback will run within the default loop
    initialize stdout_watcher; it will listen to stdout and its callback will run within the default loop
    have stdin_watcher listen for when its file becomes readable
    (when it becomes readable, stdin_cb will be called)
    run the event loop until no more work exists
    (in this case, when both watchers are stopped)
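For a self-contained feel of the same API, a hedged sketch using the current libuv names (uv_poll_init, uv_poll_start, uv_poll_stop); this is not the linked sample, just a minimal echo of its stdin half:

#include <stdio.h>
#include <unistd.h>
#include <uv.h>

static uv_poll_t stdin_watcher;

// Invoked by the loop whenever stdin becomes readable.
static void stdin_cb(uv_poll_t *handle, int status, int events) {
    char buf[1024];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n <= 0) {
        uv_poll_stop(handle);  // EOF or error: stop the watcher so the loop can exit
        return;
    }
    fwrite(buf, 1, (size_t)n, stdout);
}

int main(void) {
    uv_loop_t *loop = uv_default_loop();

    // Initialize the watcher on STDIN_FILENO; its callback runs within loop.
    uv_poll_init(loop, &stdin_watcher, STDIN_FILENO);
    uv_poll_start(&stdin_watcher, UV_READABLE, stdin_cb);

    // Runs until no active watchers remain.
    return uv_run(loop, UV_RUN_DEFAULT);
}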
The latest and greatest documentation: http://docs.libuv.org/en/latest/
