UVM monitor methodology & run_phase

I'm wondering if I have a misunderstanding about the UVM methodology for the monitor's run_phase task. The DUT sends out multiple clocks with data that the monitor is watching and checking, keeping the different clock domains separate. So my run_phase task looks like:
forever begin
    fork
        begin
            @(posedge clk1) begin
                // code to capture data
            end
        end
        begin
            @(posedge clk2) begin
                // code to capture data in this domain
            end
        end
    join_any
    disable fork;
end
My 'problem' is that if clk1 and clk2 are aligned, then only one of the posedge branches gets executed. Additionally, if I want my monitor to perform some other operations on a third async domain (say, at a multiple of clk1 or clk2), then there is a problem when the third domain lines up with clk1 or clk2.
How is the monitor supposed to work in multiple clock domains in its run_phase forever loop?

Usually, when monitoring two different clock domains, they are kept as separate forever-loop threads. There could be a scenario where you want to conditionally disable the other clock domain's thread, but I doubt this is what you intend.
fork
    forever @(posedge clk1) begin
        // code to capture data
    end
    forever @(posedge clk2) begin
        // code to capture data in this domain
    end
join // or join_none
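To see why independent forever threads behave correctly even when the clocks align, the same pattern can be mimicked outside SystemVerilog. Below is a purely illustrative Python asyncio sketch (the names and periods are mine, not from the question): two monitor loops with identical periods, the analog of perfectly aligned clocks, each still capture every "edge" because neither loop can cancel the other, unlike the fork/join_any/disable fork structure.

```python
import asyncio

captured = {"clk1": 0, "clk2": 0}

async def monitor(name, period_s):
    # Each domain gets its own forever loop; neither can starve the other,
    # even when their "edges" land at the same instant.
    while True:
        await asyncio.sleep(period_s)
        captured[name] += 1

async def main():
    # The fork ... join_none equivalent: start both loops and leave them running.
    tasks = [
        asyncio.create_task(monitor("clk1", 0.01)),  # identical periods =
        asyncio.create_task(monitor("clk2", 0.01)),  # "aligned clocks"
    ]
    await asyncio.sleep(0.2)
    for t in tasks:
        t.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)

asyncio.run(main())
```

Both counters advance together; with the join_any approach, one of them would stall whenever the events coincide.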


How can I see my pending transactions in the BSC pending pool?

I'm currently trying to get data from the BSC pending transactions, so I have been using these lines of code to see the changes in the mempool:
web3_filter = web3.eth.filter('pending')
transaction_hashes = web3.eth.getFilterChanges(web3_filter.filter_id)
for tx in transaction_hashes:
    Datatx = web3.eth.getTransaction(tx)
It seems to work: I can see new pending transactions that are added to the pool and refreshed in a while loop. But when I execute a "swapExactTokensForETH" to test, my tx doesn't appear in the mempool. What am I doing wrong? Is there anything I have been missing?

How to execute a sample just before thread shutdown in Jmeter?

Is there a way in Jmeter to execute a sample just before thread shutdown?
For example, I have a test plan that inserts data into a database and autocommit is disabled on the connection. Each thread spawns its own connection to the database. Plan runs on a schedule (i.e. I don't know samples count) and I want to commit all inserted rows at the end of the test. Is there a way to do that?
The easiest way is to go for a tearDown Thread Group, which is designed for performing clean-up actions.
The harder way is to add a separate Thread Group with 1 thread, 1 iteration, and a JSR223 Sampler with the following Groovy code:
class ShutdownListener implements Runnable {
    @Override
    public void run() {
        // your code which needs to be executed before the test ends
    }
}

new ShutdownListener().run()
Try running the commit sample based on an If Controller condition on duration or iteration number.
For example, if you are supposed to run 100 iterations, an If Controller with the condition
${__groovy(${__iterationNum} == 100)}
should help.
OK, this might not be the most optimal, but it could be workable.
Add the following code in a JSR223 Sampler inside a Once Only Controller:
def scenarioStartTime = System.currentTimeMillis()
def timeLimit = ctx.getThreadGroup().getDuration() - 10 // time limit to execute the commit sampler
vars.put("scenarioStartTime", scenarioStartTime.toString())
vars.put("timeLimit", timeLimit.toString())
Now, after your DB insert sampler, add the following condition in an If Controller and put the commit sampler inside it:
${__groovy(System.currentTimeMillis()-Long.valueOf(vars.get("scenarioStartTime"))>=Long.valueOf(vars.get("timeLimit"))*1000)}
This condition should let you execute the commit sampler just before the end of test duration.
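The timing logic behind that condition can be sketched in plain Python to check it against the clock (the constants and names here are hypothetical, for illustration only):

```python
import time

SCENARIO_DURATION_S = 60   # hypothetical thread-group duration
COMMIT_WINDOW_S = 10       # commit within the last 10 seconds of the run

scenario_start = time.time()
time_limit_s = SCENARIO_DURATION_S - COMMIT_WINDOW_S

def should_commit(now=None):
    # True once elapsed time enters the final window of the test duration,
    # mirroring the __groovy condition in the If Controller.
    now = time.time() if now is None else now
    return now - scenario_start >= time_limit_s
```

With a 60-second run, the check stays false until the 50-second mark and true afterwards, so the commit sampler fires exactly once near the end.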

Multiplexing Service Broker messages

I notice in the documentation for the SEND statement that it allows for sending the same message on multiple conversation handles at once. Let's say that in my situation, the number of places I want to send a given message is small (fewer than 5), but every message I want to send should go to all of those places. Is there any practical difference between the following:
declare @ch1 uniqueidentifier,
        @ch2 uniqueidentifier,
        @ch3 uniqueidentifier,
        @message xml;

-- approach #1
send on conversation (@ch1, @ch2, @ch3)
    message type [foo]
    (@message);

-- approach #2
send on conversation (@ch1)
    message type [foo]
    (@message);
send on conversation (@ch2)
    message type [foo]
    (@message);
send on conversation (@ch3)
    message type [foo]
    (@message);
SEND ON (@h1, @h2, @h3, ..., @hN) is going to store the message body only once in sys.transmission_queue, as opposed to SEND ON (@h1), SEND ON (@h2), ..., SEND ON (@hN), which will store the message body N times. This is, basically, the real difference. If the message body is of significant size, it can have a noticeable impact on performance.
For local delivery, when sys.transmission_queue is usually not involved, there will be no significant difference.
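To make the storage difference concrete, here is a toy Python model of the two SEND styles (this is not Service Broker's actual internal layout, just an illustration of "one body referenced N times" versus "N copies of the body"):

```python
# Toy model of sys.transmission_queue storage for the two SEND styles.
message_body = "x" * 1_000_000          # a large message payload
handles = ["ch1", "ch2", "ch3"]

# Approach 1: one SEND on N handles -> the body is stored once,
# associated with all N conversation handles.
queue_multiplexed = {"body": message_body, "handles": list(handles)}

# Approach 2: N separate SENDs -> the body is stored N times,
# once per queue entry.
queue_separate = [{"body": message_body, "handle": h} for h in handles]

bytes_multiplexed = len(queue_multiplexed["body"])
bytes_separate = sum(len(entry["body"]) for entry in queue_separate)
```

With a 1 MB body and three handles, the separate-SEND model holds 3 MB of payload where the multiplexed model holds 1 MB, which is exactly the perf concern for large messages.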
It depends on your needs. As you say you have only five conversations, you can go either way; there is no functional difference between them. But here is the catch:
Do you really want to check each message individually (whether it went out or not), or to roll back for a particular case or recipient?
Do you want to count or do something else during the sending process?
Your first approach is like a machine gun: in small cases it won't create pressure or data loss on the server, but in large cases I cannot give you a guarantee (like a machine gun, servers do jam).
Messages in the transmission queues for an instance are transmitted in sequence based on:
The priority level of their associated conversation endpoint.
Within priority level, their send sequence in the conversation.

Lua - Socket receive with timer for other event

I am trying to implement a script with a server socket that will also periodically poll for data from several sensors (i.e on 59th second of every minute). I do not want to serialize the data to disk but rather keep it in a table which the socket will respond with when polled.
Here's a sketch of the code to illustrate what I am trying to do (I've not included the client code that accesses this server, but that part is OK):
#!/usr/bin/env lua
local socket = require("socket")

local server = assert(socket.bind("*", 0))
local ip, port = server:getsockname()

local data = {}
local count = 1

local function pollSensors()
    -- I do the sensor polling here and add to the table, e.g. os.time()
    table.insert(data, os.time() .. "\t" .. tostring(count))
    count = count + 1
end

while true do
    local client = server:accept()
    client:settimeout(2)
    local line, err = client:receive()
    -- I process the received line to determine the response;
    -- for illustration I'll just send the number of items in the table
    if not err then client:send("Records: " .. #data .. "\n") end
    client:close()
    -- os.time() returns a number, so use os.date("*t") to get the seconds field
    if os.date("*t").sec == 59 then
        pollSensors()
    end
end
I am concerned that the server may occasionally block, and therefore I'll miss the 59th second.
Is this a good way to implement this or is there a (simpler) better way to do this (say using coroutines)? If coroutines would be better, how do I implement them for my scenario?
To accomplish this you need some sort of multitasking.
I'd use a network aware scheduler.
e.g. cqueues would look like this:
local cqueues = require "cqueues"
local cs = require "cqueues.socket"

local data = {}
local count = 1

local function pollSensors()
    -- I do the sensor polling here and add to the table, e.g. os.time()
    table.insert(data, os.time() .. "\t" .. tostring(count))
    count = count + 1
end

local function handle_client(client)
    client:setmode("b", "bn") -- turn on binary mode for the socket and turn off buffering
    -- ported code from the question:
    client:settimeout(2) -- I'm not sure why you chose a 2 second timeout
    local line, err = client:read("*l") -- with cqueues, this read will not block the whole program, but just yield the current coroutine until data arrives
    -- I process the received line to determine the response;
    -- for illustration I'll just send the number of items in the table
    if not err then
        assert(client:write(string.format("Records: %d\n", #data)))
    end
    client:close()
end

local cq = cqueues.new() -- create a new scheduler

-- create the first coroutine, which waits for incoming clients
cq:wrap(function()
    local server = cs.listen{host = "0.0.0.0"; port = "0"}
    local fam, ip, port = server:localname()
    print(string.format("Now listening on ip=%s port=%d", ip, port))
    for client in server:clients() do -- iterates over `accept`ed clients
        -- create a new coroutine for each client, passing the client in
        cqueues.running():wrap(handle_client, client)
    end
end)

-- create the second coroutine, which reads sensors
cq:wrap(function()
    while true do
        -- I assume you just wanted to read every 60 seconds, rather than actually *on* the 59th second of each minute
        pollSensors()
        cqueues.sleep(60)
    end
end)

-- run the scheduler until all threads exit
assert(cq:loop())
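For comparison, the same structure (a listening server plus a periodic sensor poller sharing one in-memory table, neither blocking the other) translates directly to Python's asyncio. This is a sketch under my own names and short intervals so it can be exercised quickly, not a port of the cqueues API:

```python
import asyncio

data = []  # shared in-memory table, as in the question

async def poll_sensors():
    # Periodic task: record a sample every interval without blocking the server.
    while True:
        data.append(len(data))
        await asyncio.sleep(0.05)  # stands in for the 60-second interval

async def handle_client(reader, writer):
    await reader.readline()  # request line (contents ignored here)
    writer.write(f"Records: {len(data)}\n".encode())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    poller = asyncio.create_task(poll_sensors())
    await asyncio.sleep(0.2)  # let a few polls happen

    # Act as a client to show the server answers while polling continues.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"count\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()

    poller.cancel()
    server.close()
    await server.wait_closed()
    return reply.decode()

reply = asyncio.run(main())
```

As in the cqueues version, the scheduler interleaves the two coroutines, so a slow or idle client can never make the poller miss its tick.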
Periodically launching tasks like this is commonly handled with 'cron'-style libraries, which exist in many languages.
For instance, a cron library for Lua can be downloaded here.

JMeter script that loops and increases throughput until failure

Curious if anyone has any ideas on how I might create a JMeter script that will loop the scenario while increasing throughput and load until an error is received.
Any guidance would be appreciated.
You can go the following way:
Set your test to run "Forever" on the Thread Group level
Set "Action to be taken after a Sampler error" to "Stop Test"
Add a Constant Throughput Timer to your test plan with a very low initial value, like 60 requests per minute (= 1 request per second)
Despite the word "Constant" in its name, the Constant Throughput Timer's value can be changed on the fly; see the official documentation for an example.
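The ramp-until-failure logic those steps implement can be sketched abstractly in Python (the function, step size, and failure threshold here are all hypothetical, just to show the control flow):

```python
def ramp_throughput(start_rpm, step_rpm, send_batch):
    """Raise the target rate step by step until send_batch reports a failure.

    send_batch(rpm) -> True while the system keeps up, False on the first error
    (the analog of "Stop Test" on a sampler error).
    Returns the last rate that succeeded, or None if even the first rate failed.
    """
    rpm = start_rpm
    last_ok = None
    while send_batch(rpm):
        last_ok = rpm
        rpm += step_rpm
    return last_ok

# Hypothetical system that starts failing above 300 requests/minute:
result = ramp_throughput(60, 60, lambda rpm: rpm <= 300)
```

In the JMeter plan, `send_batch` corresponds to one pass of the looping scenario at the current Constant Throughput Timer value, and the returned rate is the highest load the system survived.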
