How to perform uvm_do_on without randomization?

I have a virtual sequencer from which I execute three transactions in parallel, each one on its corresponding sequencer. So I have something like this:
class top_vseqr extends uvm_sequencer;
    type_a_seqr seqr_a;
    type_b_seqr seqr_b;
    type_c_seqr seqr_c;
    ...
endclass: top_vseqr
class simple_vseq extends uvm_sequence;
    `uvm_declare_p_sequencer(top_vseqr)
    type_a_seq seq_a;
    type_b_seq seq_b;
    type_c_seq seq_c;
    ...
    virtual task body();
        fork
            `uvm_do_on(seq_a, p_sequencer.seqr_a)
            `uvm_do_on(seq_b, p_sequencer.seqr_b)
            `uvm_do_on(seq_c, p_sequencer.seqr_c)
        join
    endtask: body
endclass: simple_vseq
But now I want to be able to drive specific transactions into the virtual sequencer, depending on the test I am running. To do so, I have a class with an analysis import that is updated every time the monitor sees a transaction in the interface, and a function that returns the next transaction to be driven. So now I want to do something like the following:
class test extends uvm_test;
    model model_a;
    simple_vseq seq;
    top_vseqr virt_seqr;
    ...
    task run_phase(uvm_phase phase);
        ...
        seq = simple_vseq::type_id::create("seq", this);
        seq.seq_a = model_a.get_sequence();
        seq.start(virt_seqr);
        ...
    endtask: run_phase
endclass: test
Digging through the UVM documentation I have seen that there is a `uvm_send macro, but it doesn't allow you to select the sequencer to run the sequence on (i.e. I haven't seen a `uvm_send_on or anything like that). What can I do?
Thanks!

You can implement the contents of the `uvm_do_on macro without the call to randomize() (like you showed in the second snippet) without any worries. This is anyway the practice suggested by some experts, because the sequencer/driver handshake mechanism is pretty simple. The `uvm_do* macros are not the norm; they're just there to help you out in the beginning.
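For instance, a minimal sketch of body() with the macros expanded by hand (assuming seq_a is pre-filled by the test while seq_b and seq_c are still randomized) could look roughly like this:
virtual task body();
    fork
        // seq_a was created and configured outside this sequence, so just start it
        seq_a.start(p_sequencer.seqr_a, this);
        begin
            seq_b = type_b_seq::type_id::create("seq_b");
            if (!seq_b.randomize()) `uvm_error("VSEQ", "seq_b randomization failed")
            seq_b.start(p_sequencer.seqr_b, this);
        end
        begin
            seq_c = type_c_seq::type_id::create("seq_c");
            if (!seq_c.randomize()) `uvm_error("VSEQ", "seq_c randomization failed")
            seq_c.start(p_sequencer.seqr_c, this);
        end
    join
endtask: body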

I don't think there is a `uvm_send_on macro, but there is a `uvm_create_on(SEQ_OR_ITEM, SEQR) macro which you can use. From the UVM documentation, this is the same as `uvm_create except that it also sets the parent sequence to the sequence in which the macro is invoked, and it sets the sequencer to the specified SEQR argument. In fact, the `uvm_create macro calls the `uvm_create_on macro internally, passing m_sequencer by default; you can override that by calling `uvm_create_on yourself.
Alternatively, you could also do a set_sequencer on your sequence_item object so that it sets the m_sequencer variable.
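As a rough sketch of that second option (assuming seq_a is the sequence handed over by the test), set_sequencer points the sequence at the right sequencer and `uvm_send then starts it without calling randomize():
virtual task body();
    // seq_a was assigned by the test (seq.seq_a = model_a.get_sequence())
    seq_a.set_sequencer(p_sequencer.seqr_a); // sets m_sequencer
    `uvm_send(seq_a)                         // starts the sequence, no randomize()
endtask: body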
Hope this helps.

`uvm_do_on_with may satisfy your requirement; you can also remove the rand qualifier from the fields in your packet to disable randomization, or add constraints.
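For example (a hedged one-liner with a hypothetical constraint, since the fields of seq_a aren't shown in the question):
`uvm_do_on_with(seq_a, p_sequencer.seqr_a, { num_pkts == 4; }) // num_pkts: hypothetical rand field of seq_a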

Related

Is FLIP-140 still correct in how it describes sorting/spilling data?

FLIP-140 states:
We will introduce a sorting step (with potential spilling, reusing the UnilateralSortMerger implementation) before every keyed operator for sorting/grouping inputs by their keys. This will allow us to process records in per-key groups, which will enable us to use a simplified implementation of a StateBackend that is not organized in key groups and only ever keeps values for a single key.
The single key at a time execution will be used for the Batch style execution as decided by the algorithm described in FLIP-134: DataStream Semantics for Bounded Input .
Moreover it will be possible to disable it through a execution.sorted-shuffles.enabled configuration option.
However, I see no documentation for execution.sorted-shuffles.enabled, and no references to it in the code. So is the above description of how things work still correct? I'm wondering how the "only keep one key's state around" behavior would work without sorting.
This code makes me think that both the sorting and special state backend are being used with batch execution:
private void setBatchStateBackendAndTimerService(StreamGraph graph) {
    boolean useStateBackend = configuration.get(ExecutionOptions.USE_BATCH_STATE_BACKEND);
    boolean sortInputs = configuration.get(ExecutionOptions.SORT_INPUTS);
    checkState(
            !useStateBackend || sortInputs,
            "Batch state backend requires the sorted inputs to be enabled!");

    if (useStateBackend) {
        LOG.debug("Using BATCH execution state backend and timer service.");
        graph.setStateBackend(new BatchExecutionStateBackend());
        graph.setChangelogStateBackendEnabled(TernaryBoolean.FALSE);
        graph.setCheckpointStorage(new BatchExecutionCheckpointStorage());
        graph.setTimerServiceProvider(
                BatchExecutionInternalTimeServiceManager::create);
    } else {
        graph.setStateBackend(stateBackend);
        graph.setChangelogStateBackendEnabled(changelogStateBackendEnabled);
    }
}

Implementation contraint: The class feature only_from_type uses an inline agent

Is there a semantic reason behind not being able to have an agent in an ensure clause of a class routine, or is it a current compiler restriction?
only_from_type (some_items: CHAIN[like Current]; a_type: detachable like {ENUMERATE}.measuring_point_type_generation): LINKED_LIST[like Current]
        -- extends Result with given some_items which are equal to given type
    do
        create Result.make
        across
            some_items is l_item
        loop
            check
                attached_type: attached l_item.type as l_type
            then
                if attached a_type as l_a_type and then l_type.is_equal (l_a_type) then
                    Result.extend (l_item)
                end
            end
        end
    ensure
        some_items.for_all (
            agent (an_item: like Current; a_type_2: detachable like {ENUMERATE}.measuring_point_type_generation) do
                if attached a_type_2 as l_type_2_a and then
                    attached an_item.type as l_item_type and then
                    l_item_type.is_equal (l_type_2_a)
                then
                    Result := True
                end
            end (a_type)
        )
        instance_free: Class
    end
gives the following error.
(And I think there is a typo here: shouldn't it be Implementation constraint instead of contraint?)
Error code: VUCR
Implementation contraint: The class feature only_from_type uses an inline agent.
What to do: Remove the inline agent from the code or make the feature a non-class one.
Class: MEASURING_POINT
Feature: only_from_type
Line: 183
some_items.for_all (
-> agent (an_item: like Current; a_type_2: detachable like {ENUMERATE}.measuring_point_type_generation) do
if attached a_type_2 as l_type_2_a and then
An inline agent (like a regular feature) takes an implicit argument: the current object. In a class feature (where the inline agent is used in the example), there is no current object. Therefore, an inline agent cannot be called from a class feature.
On the other hand, it might be possible to check that the agent does not use Current and is therefore safe to use in a class feature. The compiler reporting the error does not implement such functionality, and reports an implementation constraint error instead.

SystemVerilog: How to create an interface which is an array of simpler interfaces?

I'm attempting to create an interface that is an array of a simpler interface. In VHDL I could simply define two types, a record and an array of records. But how to do this in SystemVerilog? Here's what I've tried:
`define MAX_TC 15
...
interface scorecard_if;
    score_if if_score [`MAX_TC];
endinterface

interface score_if;
    integer tc;
    integer pass;
    integer fail;
    bit flag_match;
    real bandwidth;
endinterface
But I get an error from Aldec Active-HDL:
Error: VCP2571 TestBench/m3_test_load_tb_interfaces.sv : (53, 34):
Instantiations must have brackets (): if_score.
I also tried
interface scorecard_if;
    score_if [`MAX_TC] if_score;
endinterface

and

interface scorecard_if;
    score_if [`MAX_TC];
endinterface
but both of those just resulted in "Unexpected token" syntax errors.
Is it even possible to do this? There are two workarounds that I can think of if there isn't a way to do this. First I could define all the individual elements of score_if as unpacked arrays:
interface score_if;
    integer tc [1:`MAX_TC];
    integer pass [1:`MAX_TC];
    integer fail [1:`MAX_TC];
    bit flag_match [1:`MAX_TC];
    real bandwidth [1:`MAX_TC];
endinterface
This compiles, but it's ugly in that I can no longer refer to a single score as a group.
I might also be able to instantiate an array of score_if (using the original code), but I really want to instantiate scorecard_if inside a generate loop that would allow me to instantiate a variable number of scorecard_if interfaces based on a parameter.
Just to provide a bit of explanation of what I'm trying to do: score_if is supposed to be a record of the score for a given test case, and scorecard_if an array for all of the test cases. But my testbench has multiple independent stimulus generators, monitors and scorecards to deal with multiple independent modules inside the DUT, where the multiple is a parameter.
Part 1 : Declaring an array of interfaces
Add parentheses to the end of the interface instantiation. According to IEEE Std 1800-2012, all instantiations of hierarchical instances need the parentheses for the port list, even if the port list is blank. Some tools allow dropping the parentheses if the interface doesn't have any ports in the declaration and the instantiation is simple, but this is not part of the standard. Best practice is to use parentheses for all hierarchical instantiations.
Solution:
score_if if_score [`MAX_TC] () ;
Syntax Citations:
§ 25.3 Interface syntax & § A.4.1.2 Interface instantiation
interface_instantiation ::= // from A.4.1.2
interface_identifier [ parameter_value_assignment ] hierarchical_instance { , hierarchical_instance } ;
§ A.4.1.1 Module instantiation
hierarchical_instance ::= name_of_instance ( [ list_of_port_connections ] )
Part 2: Accessing elements of that array
Hierarchical references must be constant. Arrayed hierarchical instances cannot be accessed with dynamic indexes. This has been a rule/limitation since at least IEEE Std 1364. Read more about it in IEEE Std 1800-2012 § 23.6 Hierarchical names; the syntax rule is:
hierarchical_identifier ::= [ $root . ] { identifier constant_bit_select . } identifier
You could use a generate-for loop, as it is statically unrolled at compile/elaboration time. The limitation is that you cannot use your display message or accumulate your fail count in the loop with a dynamic index. You could use the generate loop to copy data to a local array and sum that, but that defeats your intention.
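A minimal sketch of that copy approach (the module and pass_count names are made up; if_score and `MAX_TC come from the question): each generate iteration uses a constant index, so the hierarchical references are legal, and pass_count can then be read with a runtime index.
module scorecard_reader;
    score_if if_score [`MAX_TC] ();    // arrayed interface instances
    integer  pass_count [`MAX_TC];     // local array, usable with a dynamic index

    genvar i;
    generate
        for (i = 0; i < `MAX_TC; i++) begin : g_copy
            always_comb pass_count[i] = if_score[i].pass; // constant index per iteration
        end
    endgenerate
endmodule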
An interface is normally a bundle of nets used to connect modules with a class-based test-bench or to model shared bus protocols. You are using it as a nested score card. A typedef struct would likely be better suited to your purpose. A struct is a data type and does not have the hierarchical reference limitation of modules and interfaces. It looks like you were already trying that route in your previous question; I'm not sure why you switched to nesting interfaces.
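A minimal sketch of that struct alternative, reusing the field names from the question (score_t and scorecard are assumed names):
typedef struct {
    integer tc;
    integer pass;
    integer fail;
    bit     flag_match;
    real    bandwidth;
} score_t;

score_t scorecard [`MAX_TC]; // a plain array, indexable with a runtime variable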
It looks like you are trying to create a fairly complex test environment. If so, I suggest learning UVM before spending too much time reinventing an advanced testbench architecture. Start with 1.1d, as 1.2 isn't mainstream yet.
This also works:
1. define a "container" interface:
interface environment_if (input serial_clk);
    serial_if eng_if[`NUM_OF_ENGINES](serial_clk);
    virtual serial_if eng_virtual_if[`NUM_OF_ENGINES];
endinterface
2. In the testbench, instantiate env_if, connect serial_if with a generate block, connect the virtual interfaces to the non-virtual ones, and pass the virtual interfaces to the verification environment:
module testbench;
    ....
    environment_if env_if(serial_clk);
    .....
    dut i_dut(...);

    genvar eng_idx;
    generate
        for (eng_idx = 0; eng_idx < `NUM_OF_ENGINES; eng_idx++) begin
            assign env_if.eng_if[eng_idx].serial_bit = i_dut.engine[eng_idx].serial_bit;
        end
    endgenerate
    ......
    initial begin
        env_if.eng_virtual_if = env_if.eng_if[0:`NUM_OF_ENGINES-1];
        // now possible to iterate over eng_virtual_if[]
        for (int eng_idx = 0; eng_idx < `NUM_OF_ENGINES; eng_idx++)
            uvm_config_db#(virtual serial_if)::set(null, "uvm_test_top.env", "tx_vif", env_if.eng_virtual_if[eng_idx]);
    end
endmodule

What's the difference between SqlMapClient and SqlMapSession in iBATIS?

When I read ibatis-sqlmap-2.3.4, I found that they both implement SqlMapExecutor.
SqlMapClientImpl does inserts with its localSqlMapSession, which provides thread safety.
But in Spring 2.5.6, the execute method of SqlMapClientTemplate uses SqlMapClientImpl like this:
SqlMapSession session = this.sqlMapClient.openSession();
...
return action.doInSqlMapClient(session);
The openSession method returns a new SqlMapSessionImpl each time.
My questions are:
Why does SqlMapClientTemplate use SqlMapSession instead of SqlMapClient?
Why is the localSqlMapSession of SqlMapClient unused in SqlMapClientTemplate? That is, why not use it like this:
return action.doInSqlMapClient(this.sqlMapClient);
What's the difference between SqlMapClient and SqlMapSession?
For your first question, spring-orm explains it in a comment:
// We always need to use a SqlMapSession, as we need to pass a Spring-managed
// Connection (potentially transactional) in. This shouldn't be necessary if
// we run against a TransactionAwareDataSourceProxy underneath, but unfortunately
// we still need it to make iBATIS batch execution work properly: If iBATIS
// doesn't recognize an existing transaction, it automatically executes the
// batch for every single statement...
The answer to the difference between iBATIS' SqlMapClient and SqlMapSession can be found in the SqlMapClient interface's comments:
/**
* Returns a single threaded SqlMapSession implementation for use by
* one user. Remember though, that SqlMapClient itself is a thread safe SqlMapSession
* implementation, so you can also just work directly with it. If you do get a session
* explicitly using this method <b>be sure to close it!</b> You can close a session using
* the sqlMapSession.close() method.
* <p/>
*
* @return An SqlMapSession instance.
*/
public SqlMapSession openSession();

How to make the program work this way?

So I have a program that does these calculations with numbers. The program is threaded, and the number of threads is specified by the user.
I will give a close example:
static void *program_thread(void *thread)
{
    bool somevar = true;

    if (somevar)
    {
        work = getwork();
    }
    dowork(work);
    if (condition1 blah blah)
        somevar = false; /* disable getwork */
    if (condition2)
        somevar = true;  /* condition was either met or not met, so we request
                            new work either way */
}
Then with pthreads (and I will skip some code) I do:
int main(blah)
{
    if (pthread_create(&thr->pth, NULL, program_thread, thread_number)) {
        printf("%s", "program thread create failed");
        return 1;
    }
}
Now I will start explaining. The number of threads created is specified by the user, so I do a for loop and create as many threads as I need.
Each thread calls
work = getwork();
Thus each thread gets independent work to do; however, the CPU is slow for this kind of job. It tries to compute something by trying 2^32 numbers (which is from 1 to 4,294,967,296).
But my CPU can only do around 3 million numbers per second, and by the time it reaches 4 billion numbers, it's restarted (for new work).
So I then thought of a better method. Instead of each thread getting totally different work, all the threads should get the same work and split the numbers they need to try.
The problem is that I can't control what work it gets, so I must fetch
work = getwork();
before initiating the threads. The question is HOW? Using pthread_create, obviously... but then what?
There is more than one way to do it:
split your work package into smaller parts (so that your getWork returns a new, smaller piece of work)
store your work in a common place that you access from your threads using a reader-writer pattern
with the pthread API, the 4th parameter of pthread_create is passed to your thread, so you can do something like the following code:
Work work = getWork();   /* fetch the work package once, before creating the threads */
if (pthread_create(&thr->pth, NULL, program_thread, (void *) &work))
    ...
And your program_thread function would look like this:
static void *program_thread(void *pxThread)
{
    Work *pWork = (Work *) pxThread;   /* recover the work package passed at creation */
    ...
Of course, you need to check the validity of the pointer and do the usual housekeeping (in my example, I created it on the stack, which is most probably a bad idea). Note that your code passes thread_number as a pointer, which is usually a bad idea. If you want to have more information transferred to your thread, simply hide it in a structure.
I'm not sure I fully understood your issue, but this should give you some hints. Please note also that when doing multithreading, you need to take into account specific issues like race conditions, concurrent access and the more complex lifecycle of objects...
