Get response from sequence to control virtual sequence - uvm

I need to verify a function which will take an unknown number of cycles to complete. I can determine that it is done by reading some registers and comparing their values to a model.
I have a sequence extended from uvm_reg_sequence which does this checking. I need this sequence to run at the end of my virtual sequence, and if the check fails, loop back to the beginning of the virtual sequence to run some more cycles. I will repeat this until the check passes (or I hit some timeout).
What I think I need is a way for the virtual sequence to get a response from the checker sequence to control this loop. What is the recommended way for accomplishing this?

The simplest thing I can think of is a check_passed field inside your register sequence:
class some_reg_sequence extends uvm_reg_sequence;
  `uvm_object_utils(some_reg_sequence)

  bit check_passed;

  task body();
    uvm_status_e status;
    uvm_reg_data_t value;
    check_passed = 0;  // clear before each attempt, since the sequence is restarted in a loop
    some_reg.read(status, value);
    if (<pass_condition>)
      check_passed = 1;
  endtask
endclass
The virtual sequence would just check this field after executing the register sequence:
class virtual_sequence extends uvm_sequence;
  `uvm_object_utils(virtual_sequence)

  task body();
    some_reg_sequence reg_sequence;
    `uvm_create_on(reg_sequence, ...)
    // do stuff
    // ...
    do begin
      reg_sequence.start(null);  // a reg sequence runs via the register model, so no sequencer is needed
    end while (!reg_sequence.check_passed);
  endtask
endclass
You can also implement a timeout by wrapping the do..while inside a fork...join_any, together with a wait statement.
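A minimal sketch of that timeout (the 1ms limit and the "TIMEOUT" message are placeholders; whichever fork branch finishes first wins, and disable fork kills the other):

```
task body();
  // ... create reg_sequence as above ...
  fork
    // branch 1: keep re-running the check until it passes
    do begin
      reg_sequence.start(null);
    end while (!reg_sequence.check_passed);
    // branch 2: give up after a fixed time
    #1ms;
  join_any
  disable fork;  // kill whichever branch is still running
  if (!reg_sequence.check_passed)
    `uvm_error("TIMEOUT", "register check did not pass before the timeout")
endtask
```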

Related

Flink: An abstraction that implements CheckpointListener w/o element processing

I'm new to Flink and am looking for a way to run some code once a checkpoint completes (presumably by implementing CheckpointListener) without processing events (void processElement(StreamRecord<IN> element)). Currently, I have an operator MyOperator that runs my code within its notifyCheckpointComplete function. However, I see a lot of traffic sent to that operator. The operator chain looks as follows:
input = KafkaStream
input -> IcebergSink
input -> MyOperator
I can't find a way to register a CheckpointListener in the Flink execution environment. Is it possible?
I also have the following ideas:
map input stream elements to Void/Unit before sending them to MyOperator
use a Side Output without emitting data to it; I'm wondering whether notifyCheckpointComplete would still be called.

Timer to represent AI reaction times

I'm creating a card game in pygame for my college project, and a large aspect of the game is how the game's AI reacts to the current situation. I have a function that randomly generates a number between two bounds, and this is how long I want the program to wait.
All of the code for my AI is contained within an if statement, and once it is called I want the program to wait the generated amount of time and then make its decision on what to do.
Originally I had:
pygame.time.delay(calcAISpeed(AIspeed))
This would work well if it didn't pause the rest of the program whilst the AI is waiting, stopping the user from interacting with it. This also means I cannot use while loops to create my timer.
What is the best way to work around this without going into multi-threading or other complex solutions? My project is due soon and I don't want to make massive changes. I've tried using pygame.time.Clock functions to compare the current time to the generated one, but resetting the clock once the operation has been performed has proved troublesome.
Thanks for the help and I look forward to your input.
The easiest way around this would be to give your AI a variable called something like "wait" and set it to a random number (it will have to be tweaked to your program's speed, as explained in the code below). Then, in your update function, have a conditional that checks whether that wait number is zero or below, and if not, subtract a certain amount from it each frame. Below is a basic set of code to explain this:
class AI(object):
    def __init__(self):
        # Put the rest of your AI state in here.
        self.currentwait = 100
        # ^^^ All you need is this variable defined somewhere.
        # If you want a static number as your wait time, add this variable:
        self.wait = 100  # your number here

    def updateAI(self):
        # Once the wait counter has reached zero, act.
        if self.currentwait <= 0:
            pass  # do your AI stuff here
        else:
            # Based on your game's tick speed and how long you want
            # your AI to wait, change the amount removed from
            # "currentwait" each frame.
            self.currentwait -= 100  # your number here
To give you an idea of what is going on above: you have a variable called currentwait, which describes the time the program still has to wait. While this number is greater than 0, there is still time to wait, so nothing gets executed; however, time is subtracted from the variable so that every tick there is less time to wait. You can control this rate using the clock tick rate. For example, if your clock rate is set to 60, you can make the program wait 1 second by setting currentwait to 60 and taking 1 off every tick until the number reaches zero.
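As a quick sanity check on that arithmetic, here is a plain-Python simulation of the countdown (no pygame needed; the numbers are the ones from the example above):

```python
def ticks_until_ready(currentwait, per_tick):
    """Count how many ticks it takes for the wait counter to reach zero."""
    ticks = 0
    while currentwait > 0:
        currentwait -= per_tick
        ticks += 1
    return ticks

# At a clock rate of 60, a currentwait of 60 with 1 subtracted per tick
# waits exactly one second's worth of ticks.
print(ticks_until_ready(60, 1))    # 60
print(ticks_until_ready(100, 100)) # 1
```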
Like I said this is very basic so you will probably have to change it to fit your program slightly, but it should do the trick. Hope this helps you and good luck with your project :)
The other option is to create a timer event on the event queue and listen for it in the event loop: How can I detect if the user has double-clicked in pygame?
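If you go the event-queue route, the pattern looks roughly like this (a minimal sketch; the dummy video driver, the event name, and the 50 ms delay are just there to make it self-contained):

```python
import os
import pygame

os.environ.setdefault("SDL_VIDEODRIVER", "dummy")  # headless-safe for this demo
pygame.init()
pygame.display.set_mode((1, 1))

AI_TURN_EVENT = pygame.USEREVENT + 1
pygame.time.set_timer(AI_TURN_EVENT, 50)  # post AI_TURN_EVENT after every 50 ms

decided = False
while not decided:
    for event in pygame.event.get():
        if event.type == AI_TURN_EVENT:
            pygame.time.set_timer(AI_TURN_EVENT, 0)  # delay 0 cancels the timer
            decided = True  # make the AI's decision here
```

The main loop keeps running while the timer counts down, so the user can still interact with the game.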

LabVIEW PID.vi continues when event case is False

I'm looking for a way to disable the PID.vi from running in LabVIEW when the event case container is false.
The program controls motor position to maintain constant tension on a cable, using target force and actual force as the input parameters. The output is motor position. Note that reinitialize is set to false since it needs previous instances to spool the motor.
Currently, when the event case is true the motor spools as expected and maintains the cable tension. But when the event case state is toggled, the PID.vi seems to keep running in the background, causing the motor to spool sporadically.
Is there a way to freeze the PID controls so that it continues from where it left off?
The PID VI does not run in the background. It only executes when you call it. That said, PID is a time-based calculation. It calculates the difference from the last time you called the VI and uses that to calculate the new values. If a lot of time passed, it will just try to fix it using that data.
If you want to freeze the value and then resume fixing smoothly, you can use the limits input on the top and set the max and min to your desired output. This will cause the PID VI to always output that value. You will probably need a feedback node or shift register to remember the last value output by the PID.
What Yair said is not entirely true: the integral and derivative terms are indeed time dependent, but the proportional term is not. A great reference for understanding PIDs and how they are implemented in LabVIEW can be found here (not sure why it is archived). Also, the PID VIs are coded in G, so you can simply open them to see how they operate.
If you take a closer look at the PID VI, you can see what is happening and why you might not get the response you expect. In the VI itself, dt will be either 1) what you set it to, or 2) an accumulation of time based on a tick count stored in the VI (the default). Since you have not specified a dt, the PID algorithm uses the accumulated time between calls. If you have "paused" calculation for some time, this will have an impact on the integral and derivative output.
The derivative output will kick in when there is a change in the process variable (using the process variable rather than the error prevents derivative kick). The effect of a large accumulated time between calls is to reduce the response of this term. The pause has a more significant impact on the integral term: since the response of the integral portion of the controller is proportional to the integral of the error over dt, the longer you pause the larger the response, simply because the algorithm performs a trapezoidal integration over dt.
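To make the dt effect concrete, here is a generic textbook-style PID sketch in Python (this is not the LabVIEW implementation; the class, gains, and numbers are made up for illustration). Note how the integral contribution scales directly with dt, so a long gap between calls produces a proportionally larger jump:

```python
class SimplePID:
    """Minimal PID with derivative-on-process-variable (avoids derivative kick)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_pv = None

    def update(self, setpoint, pv, dt):
        error = setpoint - pv
        self.integral += error * dt  # grows with dt: a long pause inflates this term
        d_pv = 0.0 if self.prev_pv is None else (pv - self.prev_pv) / dt
        self.prev_pv = pv
        return self.kp * error + self.ki * self.integral - self.kd * d_pv

# Same error, two different gaps between calls: with a pure-integral
# controller, a 2 s gap gives 200x the output of a 10 ms gap.
print(SimplePID(0.0, 1.0, 0.0).update(1.0, 0.0, 0.01))  # 0.01
print(SimplePID(0.0, 1.0, 0.0).update(1.0, 0.0, 2.0))   # 2.0
```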
My first suggestion is don't pause the controller - let the PID do what it is supposed to do. If you are using it properly, then you should not have to stop the controller action. But, if you must pause the controller action, consider re-initializing the controller. This will force the controller to reset the accumulated time term and the response in the first iteration will be purely proportional.
Hope this helps.

triggering event from default sequence to activate another sequence in test env

I have a default sequence set in the test (uvm_test) as:
uvm_config_db#(uvm_object_wrapper)::set(this, "sve.vs.main_phase", "default_sequence",main_vseq_c::type_id::get());
Unfortunately, there is another sequence in the test env, called 'seq_seq_c', which is also activated on the main_phase.
How can I synchronize the two sequences? Can I use events in 'main_vseq_c' to trigger 'seq_seq_c'? And if I can, how do I do it?
You can create a uvm_event. uvm_event names are unique: uvm_event_pool lets you retrieve the instance of the uvm_event with a given name, and if no uvm_event of that name exists yet, the pool creates one the first time get() is called.
Both the main sequence and the other sequence get a uvm_event with the same name. The main sequence calls .wait_trigger() and the other sequence calls .trigger() on the uvm_event.
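A minimal sketch of that handshake (the event name "sync_ev" is arbitrary; both sides just have to agree on it):

```
// in main_vseq_c::body() - block until the other sequence triggers
uvm_event ev = uvm_event_pool::get_global("sync_ev");
ev.wait_trigger();

// in seq_seq_c::body() - release the waiting sequence
uvm_event ev = uvm_event_pool::get_global("sync_ev");
ev.trigger();
```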

Apache Giraph : Number of vertices processed by each partition

I am a newbie trying to understand the workings of Giraph 1.2.0 with Hadoop 1.2.1.
Is there any way to figure out the number of vertices processed by each mapper?
The call method of the org.apache.giraph.graph.ComputeCallable class is executed once per superstep. Inside this method, the computePartition function is called for each partition owned by the map task. So you can easily add an integer counter to this class. Then, in computePartition, increment the counter each time the compute method of a vertex is called. Finally, at the end of the call method, print your counter. For each superstep of each mapper, this prints the number of vertices processed.
