Finding the correct traversal path for a packet - unetstack

Suppose a scenario similar to the one in the image above, where Node-A and Node-B send data to Node-D via Node-C. Node-A and Node-B each sent one data packet to Node-D: Node-A sent a message with msg-id = 1, which was received at Node-C with msg-id = 1, and Node-B sent a message with msg-id = 3, which was received at Node-C with msg-id = 3. Now, how do I find out whether the message forwarded from Node-C with msg-id = 2 came from Node-A or Node-B, and whether the message forwarded from Node-C with msg-id = 4 came from Node-A or Node-B? How do I follow the correct path while traversing the trace.json file of the simulation?

The trace.json file records the thread of events and the stimulus associated with each event, which should give you the information you need for your tracing.
To illustrate the idea, let us look at how the ranging agent works. Other agents such as the router work similarly, so you can use the same idea for tracing routed frames.
See the following entry from a trace.json output of a ranging simulation:
{
"time":1645606042088,
"component":"ranging::org.arl.unet.localization.Ranging/A",
"threadID":"88b89b82-eb37-4ac5-83da-3695acb80e7f",
"stimulus":{"clazz":"org.arl.unet.phy.RxFrameNtf","messageID":"88b89b82-eb37-4ac5-83da-3695acb80e7f","performative":"INFORM","sender":"phy","recipient":"#phy__ntf"},
"response":{"clazz":"org.arl.unet.phy.TxFrameReq","messageID":"ee9f72f5-5753-4d46-a885-9e446bbf9746","performative":"REQUEST","recipient":"phy"}
}
This entry shows that the received frame (RxFrameNtf) with ID 88b89b82... caused the transmission of a new frame (TxFrameReq) with ID ee9f72f5.... The threadID entry can also be helpful, as the same threadID is maintained for the whole chain of events within a single node.
Each of your frames from nodes A and B will have unique IDs, and so will each of the frames relayed by node C. The trace.json entry corresponding to each relay transmission should tell you which stimulus (frame from node A or B) resulted in the transmission.
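If you want to extract these chains programmatically, a minimal Python sketch along the following lines may help. This is not part of the UnetStack API; it assumes the event records can be flattened into a list of dicts shaped like the entry above, and the flattening is kept generic because the exact nesting of trace.json can differ between versions:
import json

def flatten_events(node):
    # Recursively collect every record that carries a stimulus or a response,
    # regardless of how deeply the groups in trace.json are nested.
    if isinstance(node, dict):
        if "stimulus" in node or "response" in node:
            yield node
        for value in node.values():
            yield from flatten_events(value)
    elif isinstance(node, list):
        for item in node:
            yield from flatten_events(item)

with open("trace.json") as f:
    events = list(flatten_events(json.load(f)))

# All events that belong to the same chain on a single node share a threadID:
thread = [e for e in events if e.get("threadID") == "88b89b82-eb37-4ac5-83da-3695acb80e7f"]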
For your application, I extract a few of the JSON entries from your trace.json to illustrate this:
{
"time":10000, "component":"router::org.arl.unet.net.Router/A", "threadID":"2e37d6d6-a3d2-4e0f-aacc-456efdae91bb",
"stimulus":{"clazz":"org.arl.unet.DatagramReq", "messageID":"2e37d6d6-a3d2-4e0f-aacc-456efdae91bb", "performative":"REQUEST", "sender":"simulator", "recipient":"router"},
"response":{"clazz":"org.arl.unet.DatagramReq", "messageID":"54a55a12-4051-4934-88bc-beb4fa28c548", "performative":"REQUEST", "recipient":"uwlink"}
}
{
"time":10491, "component":"uwlink::org.arl.unet.link.ReliableLink/A", "threadID":"54a55a12-4051-4934-88bc-beb4fa28c548",
"stimulus":{"clazz":"org.arl.unet.DatagramReq", "messageID":"54a55a12-4051-4934-88bc-beb4fa28c548", "performative":"REQUEST", "sender":"router", "recipient":"uwlink"},
"response":{"clazz":"org.arl.unet.phy.TxFrameReq", "messageID":"87757d67-7510-4856-a9ee-a27924f9548a", "performative":"REQUEST", "recipient":"phy"}
}
{
"time":10491, "component":"phy::org.arl.unet.sim.HalfDuplexModem/A", "threadID":"87757d67-7510-4856-a9ee-a27924f9548a",
"stimulus":{"clazz":"org.arl.unet.phy.TxFrameReq", "messageID":"87757d67-7510-4856-a9ee-a27924f9548a", "performative":"REQUEST", "sender":"uwlink", "recipient":"phy"},
"response":{"clazz":"org.arl.unet.sim.HalfDuplexModem$TX", "messageID":"31b892fd-1b2c-40b1-a1d6-4dc457f57a2c", "performative":"INFORM", "recipient":"phy"}
}
{
"time":11870, "component":"phy::org.arl.unet.sim.HalfDuplexModem/C", "threadID":"31b892fd-1b2c-40b1-a1d6-4dc457f57a2c",
"stimulus":{"clazz":"org.arl.unet.sim.HalfDuplexModem$TX", "messageID":"31b892fd-1b2c-40b1-a1d6-4dc457f57a2c", "performative":"INFORM", "sender":"phy", "recipient":"phy"},
"response":{"clazz":"org.arl.unet.phy.RxFrameNtf", "messageID":"5cd195ef-74b0-4394-8f9a-9077de08bc56", "performative":"INFORM", "sender":"phy", "recipient":"#phy__ntf"}
}
{
"time":11870, "component":"router::org.arl.unet.net.Router/C", "threadID":"5cd195ef-74b0-4394-8f9a-9077de08bc56",
"stimulus":{"clazz":"org.arl.unet.phy.RxFrameNtf", "messageID":"5cd195ef-74b0-4394-8f9a-9077de08bc56", "performative":"INFORM", "sender":"phy", "recipient":"#phy__ntf"},
"response":{"clazz":"org.arl.unet.DatagramReq", "messageID":"e9ecfc32-ff9a-4750-99d5-b3b462dcd660", "performative":"REQUEST", "recipient":"uwlink"}
}
{
"time":12259, "component":"uwlink::org.arl.unet.link.ReliableLink/C", "threadID":"e9ecfc32-ff9a-4750-99d5-b3b462dcd660",
"stimulus":{"clazz":"org.arl.unet.DatagramReq", "messageID":"e9ecfc32-ff9a-4750-99d5-b3b462dcd660", "performative":"REQUEST", "sender":"router", "recipient":"uwlink"},
"response":{"clazz":"org.arl.unet.phy.TxFrameReq", "messageID":"76200f60-2334-4f20-88a3-9c2d42a769ad", "performative":"REQUEST", "recipient":"phy"}
}
{
"time":12259, "component":"phy::org.arl.unet.sim.HalfDuplexModem/C", "threadID":"76200f60-2334-4f20-88a3-9c2d42a769ad",
"stimulus":{"clazz":"org.arl.unet.phy.TxFrameReq", "messageID":"76200f60-2334-4f20-88a3-9c2d42a769ad", "performative":"REQUEST", "sender":"uwlink", "recipient":"phy"},
"response":{"clazz":"org.arl.unet.sim.HalfDuplexModem$TX", "messageID":"f3161e4e-1007-4540-8192-f0d7bf80e126", "performative":"INFORM", "recipient":"phy"}
}
{
"time":13542, "component":"phy::org.arl.unet.sim.HalfDuplexModem/D", "threadID":"f3161e4e-1007-4540-8192-f0d7bf80e126",
"stimulus":{"clazz":"org.arl.unet.sim.HalfDuplexModem$TX", "messageID":"f3161e4e-1007-4540-8192-f0d7bf80e126", "performative":"INFORM", "sender":"phy", "recipient":"phy"},
"response":{"clazz":"org.arl.unet.phy.RxFrameNtf", "messageID":"ab5e15a9-27e0-4495-b0b8-f199159cb2a3", "performative":"INFORM", "sender":"phy", "recipient":"#phy__ntf"}
}
{
"time":13542, "component":"uwlink::org.arl.unet.link.ReliableLink/D", "threadID":"ab5e15a9-27e0-4495-b0b8-f199159cb2a3",
"stimulus":{"clazz":"org.arl.unet.phy.RxFrameNtf", "messageID":"ab5e15a9-27e0-4495-b0b8-f199159cb2a3", "performative":"INFORM", "sender":"phy", "recipient":"#phy__ntf"},
"response":{"clazz":"org.arl.unet.DatagramNtf", "messageID":"bb534fc0-47ec-4b2d-8124-07af87528d37", "performative":"INFORM", "recipient":"#uwlink__ntf"}
}
{
"time":13542, "component":"router::org.arl.unet.net.Router/D", "threadID":"bb534fc0-47ec-4b2d-8124-07af87528d37",
"stimulus":{"clazz":"org.arl.unet.DatagramNtf", "messageID":"bb534fc0-47ec-4b2d-8124-07af87528d37", "performative":"INFORM", "sender":"uwlink", "recipient":"#uwlink__ntf"},
"response":{"clazz":"org.arl.unet.DatagramNtf", "messageID":"5e660793-9a44-4341-8843-fc7fa91aa450", "performative":"INFORM", "recipient":"#router__ntf"}
}
These entries show the sequence of events that occurred:
time 10000: DatagramReq from router#A to uwlink#A
time 10491: TxFrameReq from uwlink#A to phy#A
time 10491: TX from phy#A to phy#C
time 11870: RxFrameNtf from phy#C (publish on topic)
time 11870: DatagramReq from router#C to uwlink#C
time 12259: TxFrameReq from uwlink#C to phy#C
time 12259: TX from phy#C to phy#D
time 13542: RxFrameNtf from phy#D (publish on topic)
time 13542: DatagramNtf from uwlink#D (publish on topic)
time 13542: DatagramNtf from router#D (publish on topic)
You should find that each JSON entry's response.messageID corresponds to the next JSON entry's stimulus.messageID. This allows you to follow through the sequence of events.
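Building on the flattening sketch above, a hypothetical follow_chain helper can walk these links automatically by repeatedly looking up the entry whose stimulus.messageID matches the current response.messageID (the starting ID below is the one from the first extracted entry; `events` is the flattened list from the earlier sketch):
def follow_chain(events, start_message_id):
    # Walk the response.messageID -> stimulus.messageID links across entries
    # (and therefore across nodes) until the chain ends.
    chain = []
    current = start_message_id
    while current is not None:
        entry = next((e for e in events
                      if e.get("stimulus", {}).get("messageID") == current), None)
        if entry is None:
            break
        chain.append(entry)
        current = entry.get("response", {}).get("messageID")
    return chain

for e in follow_chain(events, "2e37d6d6-a3d2-4e0f-aacc-456efdae91bb"):
    print(e["time"], e["component"], e.get("response", {}).get("clazz"))
Running this on the extracted entries reproduces the event sequence listed above, so each frame relayed by node C can be traced back to the original DatagramReq from node A or node B.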

Related

Camel reactive streams not completing when subscribed more than once

@Component
class TestRoute(
    context: CamelContext,
) : EndpointRouteBuilder() {
    val streamName: String = "news-ticker-stream"
    val logger = LoggerFactory.getLogger(TestRoute::class.java)
    val camel: CamelReactiveStreamsService = CamelReactiveStreams.get(context)
    var count = 0L
    val subscriber: Subscriber<String> =
        camel.streamSubscriber(streamName, String::class.java)

    override fun configure() {
        from("timer://foo?fixedRate=true&period=30000")
            .process {
                count++
                logger.info("Start emitting data for the $count time")
                Flux.fromIterable(
                    listOf(
                        "APPLE", "MANGO", "PINEAPPLE"
                    )
                )
                    .doOnComplete {
                        logger.info("All the data are emitted from the flux for the $count time")
                    }
                    .subscribe(
                        subscriber
                    )
            }

        from(reactiveStreams(streamName))
            .to("file:outbox")
    }
}
2022-07-07 13:01:44.626 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 1 time
2022-07-07 13:01:44.640 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : All the data are emitted from the flux for the 1 time
2022-07-07 13:01:44.646 INFO 50988 --- [1 - timer://foo] a.c.c.r.s.ReactiveStreamsCamelSubscriber : Reactive stream 'news-ticker-stream' completed
2022-07-07 13:02:14.616 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 2 time
2022-07-07 13:02:44.610 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 3 time
2022-07-07 13:02:44.611 WARN 50988 --- [1 - timer://foo] a.c.c.r.s.ReactiveStreamsCamelSubscriber : There is another active subscription: cancelled
The reactive stream does not complete when the route runs more than once. As you can see in the logs, the message I added in doOnComplete only appears the first time the timer route is triggered; when the timer route fires the second time, there is no completion message. I put a breakpoint in ReactiveStreamsCamelSubscriber and found that the first time around the flow enters the onNext() and onComplete() methods, but it does not enter those methods when the timer runs the second time. I cannot understand why this is happening.

How to calculate Round Trip Time in Ping utility implementation in UnetStack

I developed a ping utility similar to the ping example available in UnetStack 1.3 (/samples/ping) to ping a remote node over a multi-hop link, but I am unable to calculate the Round Trip Time (RTT) when I transmit the ping packet via the routing agent (with static route information added to the routing table using RouteDiscoveryNtf), since there is no timing information available in the upper-layer notifications (DatagramNtf, DatagramDeliveryNtf or DatagramFailureNtf).
The round trip time is calculated as the difference of the rxtime and txtime available in TxFrameNtf and RxFrameNtf, as implemented in the closure (fshrc.groovy) in the ping example.
I also tried analyzing the ping utility implemented in UnetStack3, but was unable to work it out. Please let me know how the RTT is calculated.
Here's a simplified version of the implementation of the ping command in UnetStack3:
def ping(int n, int m = 3, long timeout = 30000) {
    println "PING $n"
    AgentID router = agentForService(Services.ROUTING)
    int p = 0
    m.times { count ->
        def t0 = currentTimeMillis()                        // start of RTT measurement
        router << new DatagramReq(to: n, reliability: true)
        // wait for the delivery (or failure) notification for the reliable datagram
        def ntf = receive({
            it instanceof DatagramDeliveryNtf || it instanceof DatagramFailureNtf
        }, timeout)
        def t = currentTimeMillis() - t0                    // RTT measured at the application layer
        if (ntf == null || ntf instanceof DatagramFailureNtf) {
            println "Request timeout for seq $count"
        } else {
            p++
            println "Response from $n: seq=$count rthops=2 time=$t ms"
        }
        delay(5000)
    }
    println "$m packets transmitted, $p packets received, ${Math.round(100*(m-p)/m)}% packet loss"
}

Missing events for listeners dronekit

I'm using dronekit with event listeners to keep track of the camera's video recording status. I did this because I didn't find a direct way to query the recording status, so I keep track of the commands I send and change the modes if they succeed.
However, I observed that my listener does not receive all events. Is this a common issue? Can it be fixed? Is there a frequency setting that I need to change?
@vehicle.on_message('GOPRO_SET_RESPONSE')
def listener(self, name, message):
    global mode, recording, way_points, nadir_taken
    if message.cmd_id == 2:
        log.debug('Shutter:%s' % message)
        if message.status == 0:
            if mode == MODE_VIDEO:
                if recording:
                    recording = False
                    log.info("Stopped video")
                    # message_handler.set(message_handler.get() + " Stopped Recording.")
                    record_handler.set(NO_STRING)
                    plot.info(STOP_STRING_VIDEO)
                    note.info(STOP_STRING_VIDEO)
                    thread.start_new(speak, (VIDEO_RECORD_ON_MSG,))
                else:
                    recording = True
                    log.info("started recording video")
                    # message_handler.set(message_handler.get() + "\n Started Recording.")
                    record_handler.set(YES_STRING)
                    plot.info(START_STRING_VIDEO)
                    note.info(START_STRING_VIDEO)
                    thread.start_new(speak, (VIDEO_RECORD_OFF_MSG,))
            else:
                log.info("Image Captured at %s", str(loc))
    else:
        log.info('Unidentified Message:%s' % message)

how to make sure that flink job has finished executing and then perform some tasks

I want to perform some tasks after the Flink job has completed. I don't have any issues when I run the code in IntelliJ, but there are issues when I run the Flink jar from a shell script. I am using the line below to make sure that the execution of the Flink program is complete:
// start the execution
JobExecutionResult jobExecutionResult = environment.execute(" Started the execution ");
is_job_finished = jobExecutionResult.isJobExecutionResult();
I am not sure whether the above check is correct or not.
Then I use the above variable in the block below to perform some tasks:
if (print_mode && is_job_finished) {
    System.out.println(" \n \n -- System related variables -- \n");
    System.out.println(" Stream_join Window length = " + WindowLength_join__ms + " milliseconds");
    System.out.println(" Input rate for stream RR = " + input_rate_rr_S + " events/second");
    System.out.println("Stream RR Runtime = " + Stream_RR_RunTime_S + " seconds");
    System.out.println(" # raw events in stream RR = " + Total_Number_Of_Events_in_RR + "\n");
}
Any suggestions ?
You can register a job listener on the execution environment.
For example:
env.registerJobListener(new JobListener {
    // Callback on job submission.
    override def onJobSubmitted(jobClient: JobClient, throwable: Throwable): Unit = {
        if (throwable == null) {
            log.info("SUBMIT SUCCESS")
        } else {
            log.info("FAIL")
        }
    }

    // Callback on job execution finished, successfully or unsuccessfully.
    override def onJobExecuted(jobExecutionResult: JobExecutionResult, throwable: Throwable): Unit = {
        if (throwable == null) {
            log.info("SUCCESS")
        } else {
            log.info("FAIL")
        }
    }
})
Register a JobListener on your StreamExecutionEnvironment.
A JobListener works well as long as you are not using the SQL API; if you use the SQL API, onJobExecuted will never be called. Here is an idea you can adapt: the source is Kafka, and the sink can be of any type.
EndSign: a sentinel record that follows the last real record in each partition. When your Flink job has consumed it, it means the rest of that partition is empty.
Close logic:
When your Flink job processes an EndSign, it calls a JobController, which increments a counter.
When the JobController counter equals the partition count, the JobController checks the consumer-group lag to ensure the Flink job has received all the data.
At that point, you know the job is finished.
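To make the close logic concrete, here is a language-agnostic sketch in Python (this is not Flink API; the JobController class and its methods are made up purely for illustration) of the counting described above:
class JobController:
    def __init__(self, partition_count):
        self.partition_count = partition_count
        self.finished_partitions = set()

    def on_end_sign(self, partition_id):
        # Called when the job processes the EndSign sentinel of a partition.
        self.finished_partitions.add(partition_id)

    def job_finished(self):
        # In the real setup you would also check the consumer-group lag here
        # to be sure every record before the sentinel has been processed.
        return len(self.finished_partitions) == self.partition_count

controller = JobController(partition_count=3)
for p in range(3):
    controller.on_end_sign(p)
print(controller.job_finished())  # True once every partition has reported its EndSign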

OpenFlow - How are ICMP messages handled

I am running a Ryu controller and a Mininet instance with 2 hosts and 1 switch like below.
H1---S---H2
Code in Ryu controller
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet
from ryu.lib.packet import ethernet
from ryu.lib.packet import ether_types

class SimpleSwitch13(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(SimpleSwitch13, self).__init__(*args, **kwargs)
        self.mac_to_port = {}

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
Basically, the switch flow table is empty. In this case, when I run h1 ping h2 from my Mininet console and record the packet exchanges, this is what I get in Wireshark on host h1.
There is no router in the Mininet instance. How am I receiving an ICMP Destination Host Unreachable message from the same host that initiated the ping?
The app code you posted is not complete.
You can get the complete simple_switch_13.py from the osrg GitHub repository.
Take a look; it is like this:
class SimpleSwitch13(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(SimpleSwitch13, self).__init__(*args, **kwargs)
        self.mac_to_port = {}

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        self.add_flow(datapath, 0, match, actions)

    def add_flow(self, datapath, priority, match, actions, buffer_id=None):
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        if buffer_id:
            mod = parser.OFPFlowMod(datapath=datapath, buffer_id=buffer_id,
                                    priority=priority, match=match,
                                    instructions=inst)
        else:
            mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                                    match=match, instructions=inst)
        datapath.send_msg(mod)

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        # If you hit this you might want to increase
        # the "miss_send_length" of your switch
        if ev.msg.msg_len < ev.msg.total_len:
            self.logger.debug("packet truncated: only %s of %s bytes",
                              ev.msg.msg_len, ev.msg.total_len)
        msg = ev.msg
        datapath = msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        in_port = msg.match['in_port']
        pkt = packet.Packet(msg.data)
        eth = pkt.get_protocols(ethernet.ethernet)[0]
        if eth.ethertype == ether_types.ETH_TYPE_LLDP:
            # ignore lldp packet
            return
        dst = eth.dst
        src = eth.src
        dpid = datapath.id
        self.mac_to_port.setdefault(dpid, {})
        self.logger.info("packet in %s %s %s %s", dpid, src, dst, in_port)
        # learn a mac address to avoid FLOOD next time.
        self.mac_to_port[dpid][src] = in_port
        if dst in self.mac_to_port[dpid]:
            out_port = self.mac_to_port[dpid][dst]
        else:
            out_port = ofproto.OFPP_FLOOD
        actions = [parser.OFPActionOutput(out_port)]
        # install a flow to avoid packet_in next time
        if out_port != ofproto.OFPP_FLOOD:
            match = parser.OFPMatch(in_port=in_port, eth_dst=dst)
            # verify if we have a valid buffer_id, if yes avoid to send both
            # flow_mod & packet_out
            if msg.buffer_id != ofproto.OFP_NO_BUFFER:
                self.add_flow(datapath, 1, match, actions, msg.buffer_id)
                return
            else:
                self.add_flow(datapath, 1, match, actions)
        data = None
        if msg.buffer_id == ofproto.OFP_NO_BUFFER:
            data = msg.data
        out = parser.OFPPacketOut(datapath=datapath, buffer_id=msg.buffer_id,
                                  in_port=in_port, actions=actions, data=data)
        datapath.send_msg(out)
This simple_switch_13.py app only handles layer-2 forwarding, which is your case.
As you can see, once the connection is established, switch_features_handler listens for the switch-features event and installs a "send everything to the controller" flow on the switch (the table-miss flow).
In the normal state, when the controller receives a PACKET_IN, it checks whether the destination MAC is in mac_to_port. If it is, the packet is output on that port and, at the same time, a flow is installed (whose match fields are the input port and the destination MAC); otherwise, the action is set to FLOOD by assigning the output port to FLOOD.
That is the layer-2 switching case.
For ICMP message handling in layer-3 switching, you need to read the rest_router.py code, which is a lot more complicated.
You get ICMP Destination Host Unreachable because the ARP request is never answered by h2.
Since h1 gets no ARP reply, the ICMP error message comes from its own IP stack.
