Logging packet sent time by physical layer - unetstack

From the sender node, I am sending a packet with this command in the simulation script:
router << new DatagramReq(to: 1, data:[1,1,1,1], protocol:Protocol.DATA);
I want to know how to log the time at which the datagram is sent by the physical layer of the sender in the log-0.txt file.

Setting the log level of the agent to FINE should log all messages sent by that agent. The logger name for each agent follows the class name of the agent, by convention. If you do a ps, you'll see the class names of the agents:
> ps
remote: org.arl.unet.remote.RemoteControl - IDLE
rdp: org.arl.unet.net.RouteDiscoveryProtocol - IDLE
ranging: org.arl.unet.phy.Ranging - IDLE
uwlink: org.arl.unet.link.ReliableLink - IDLE
node: org.arl.unet.nodeinfo.NodeInfo - IDLE
phy: org.arl.unet.sim.HalfDuplexModem - IDLE
arp: org.arl.unet.addr.AddressResolution - IDLE
transport: org.arl.unet.transport.SWTransport - IDLE
router: org.arl.unet.net.Router - IDLE
mac: org.arl.unet.mac.CSMA - IDLE
You can see that the phy agent class is org.arl.unet.sim.HalfDuplexModem if you're running the simulator. You can set the log level of the simulation classes to FINE:
> logLevel 'org.arl.unet.sim', FINE
Now, the logs will show the messages (and their corresponding times). Example:
1567801752688|FINE|org.arl.unet.sim.HalfDuplexModem/A#47:handleTxFrameReq|TxFrameReq:REQUEST[type:DATA to:1 (4 bytes)] Data:
01010101
1567801752741|FINE|org.arl.unet.sim.HalfDuplexModem/A#47:sendTxFrameNtf|TxFrameNtf:INFORM[type:DATA txTime:753410296]
1567801753392|FINE|org.arl.unet.sim.HalfDuplexModem/B#63:sendRxFrameStartNtf|RxFrameStartNtf:INFORM[type:DATA rxTime:3515132027]
1567801754095|FINE|org.arl.unet.sim.HalfDuplexModem/B#63:sendRxFrameNtf|RxFrameNtf:INFORM[type:DATA from:232 to:1 rxTime:3515132027 (4 bytes)] Data:
01010101
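If you prefer to enable this from the simulation script itself rather than typing the command in the shell, a minimal sketch follows (assuming the stock setup, where UnetStack/fjåge agents log through java.util.logging and the configured handlers already pass FINE records through):

import java.util.logging.Level
import java.util.logging.Logger

// raise the simulator classes to FINE so the TxFrameReq/TxFrameNtf lines
// (and their timestamps) show up in log-0.txt
Logger.getLogger('org.arl.unet.sim').setLevel(Level.FINE)

The timestamp at the start of each log line then tells you when the phy agent handled the TxFrameReq and when it sent the TxFrameNtf.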

Related

VOLTTRON Actuator agent failure message but it appears to be working just fine

I'm using the actuator agent's get_multiple_points with VOLTTRON 8.1.3 to make about 30 BACnet read requests for sensors with:
zone_setpoints_data = self.vip.rpc.call('platform.actuator', 'get_multiple_points', actuator_get_this_data).get(timeout=300)
And I notice this debug message:
2022-06-09 19:55:21,927 (loadshedagent-0.1 2930461) __main__ DEBUG: [Simple DR Agent INFO] - ACTUATOR SCHEDULE EVENT SUCESS {'result': 'FAILURE', 'data': {}, 'info': 'REQUEST_CONFLICTS_WITH_SELF'}
But I do get the data, so it appears to be working fine, in addition to the 1-minute-interval scrape of all BACnet devices inside the building. Is this anything to worry about, or should I make some sort of adjustment?
EDIT
Code snippet for scheduling the actuator is below. Am I scheduling the actuator agent wrong with the _now, str_start, _end, str_end on 30 devices for get_multiple_points? Should I be adjusting this td(seconds=10) uniquely to space out the call for each device?
# create start and end timestamps for actuator agent scheduling
_now = get_aware_utc_now()
str_start = format_timestamp(_now)
_end = _now + td(seconds=10)
str_end = format_timestamp(_end)
actuator_schedule_request = []
for group in self.nested_group_map.values():
    for device_address in group.values():
        device = '/'.join([self.building_topic, str(device_address)])
        actuator_schedule_request.append([device, str_start, str_end])
# use actuator agent to get all zone temperature setpoint data
result = self.vip.rpc.call('platform.actuator', 'request_new_schedule', self.core.identity, 'my_schedule', 'HIGH', actuator_schedule_request).get(timeout=90)
It seems to me that getting points won't cause that debug message. The message you are seeing comes from https://volttron.readthedocs.io/en/releases-8.x/driver-framework/actuator/actuator-agent.html?highlight=REQUEST_CONFLICTS_WITH_SELF#task-schedule-failures
So it appears that somewhere your schedules are overlapping. But the call to get_multiple_points should not have anything to do with this error.
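If the conflict comes from re-scheduling the same task name while an earlier schedule is still active, one way to rule that out is to give each scheduling cycle its own task id. A rough sketch (the task-id format below is purely illustrative, not a VOLTTRON requirement):

# hypothetical unique task id per scrape cycle, so a new request cannot
# conflict with a still-active schedule created by the previous cycle
task_id = '{}_setpoint_read_{}'.format(self.core.identity, format_timestamp(_now))
result = self.vip.rpc.call('platform.actuator', 'request_new_schedule',
                           self.core.identity, task_id, 'HIGH',
                           actuator_schedule_request).get(timeout=90)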

PAHO MQTT 5 throwing exception when using same clientId in routes

When using paho-mqtt5:test more than once with the same clientId, it throws the exception Client is not connected, but if I use a different clientId for each to and from, it works fine.
2021-10-05 19:25:28,650 ERROR [org.apa.cam.pro.err.DefaultErrorHandler] (Camel (camel-1) thread #0 - timer://test) Failed delivery for (MessageId: 871E4623819E4FB-000000000000001B on ExchangeId: 871E4623819E4FB-000000000000001B). Exhausted after delivery attempt: 1 caught: Client is not connected (32104)
Message History (complete message history is disabled)
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[route1 ] [route1 ] [from[timer://test?period=1000] ] [ 0]
...
[route1 ] [to1 ] [paho:test ] [ 0]
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
: Client is not connected (32104)
at org.eclipse.paho.mqttv5.client.internal.ExceptionHelper.createMqttException(ExceptionHelper.java:32)
at org.eclipse.paho.mqttv5.client.internal.ClientComms.sendNoWait(ClientComms.java:231)
at org.eclipse.paho.mqttv5.client.MqttAsyncClient.publish(MqttAsyncClient.java:1530)
at org.eclipse.paho.mqttv5.client.MqttClient.publish(MqttClient.java:564)
at org.apache.camel.component.paho.mqtt5.PahoMqtt5Producer.process(PahoMqtt5Producer.java:55)
at org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:66)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:172)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:463)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:179)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:64)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:184)
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:398)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:210)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:76)
at java.base/java.util.TimerThread.mainLoop(Timer.java:556)
at java.base/java.util.TimerThread.run(Timer.java:506)
Here is my code which is throwing the exception:
@ApplicationScoped
class TestRouter : RouteBuilder() {
    override fun configure() {
        val mqtt5Component = PahoMqtt5Component()
        mqtt5Component.configuration = PahoMqtt5Configuration().apply {
            brokerUrl = "tcp://192.168.99.101:1883"
            clientId = "paho123"
            isCleanStart = true
        }
        context.addComponent("paho-mqtt5", mqtt5Component)

        from("timer:test?period=1000").setBody(constant("Testing timer2")).to("paho-mqtt5:test")

        from("paho-mqtt5:test").process { e ->
            val body = (e.`in`?.body as? ByteArray)?.let { String(it) }
            println("test body 1 => $body")
        }
    }
}
@William, this is expected behavior.
The message broker uses the client id to differentiate between clients so it can perform housekeeping for a client connection that is no longer used.
In addition, a client may have a "Last Will and Testament" that the broker keeps track of.
It is acceptable to append a random number to the end of your current 'clientId', since it is likely no one but you will care about it.
If you have access to the individual's login, you could use that as well, but you would still want to make each session unique in case they run multiple sessions.
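Applied to the Camel route in the question, a minimal sketch of that advice (the component names paho5-pub/paho5-sub and the random suffix are illustrative choices, not Camel requirements) is to register two components, each with its own clientId:

// hypothetical helper: build a component with a unique clientId per endpoint
fun mqtt5Component(baseId: String) = PahoMqtt5Component().apply {
    configuration = PahoMqtt5Configuration().apply {
        brokerUrl = "tcp://192.168.99.101:1883"
        clientId = baseId + "-" + java.util.UUID.randomUUID().toString().substring(0, 8)
        isCleanStart = true
    }
}

context.addComponent("paho5-pub", mqtt5Component("paho123-pub"))
context.addComponent("paho5-sub", mqtt5Component("paho123-sub"))

from("timer:test?period=1000").setBody(constant("Testing timer2")).to("paho5-pub:test")
from("paho5-sub:test").process { e ->
    println("test body 1 => " + (e.`in`?.body as? ByteArray)?.let { String(it) })
}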
Maybe I don't understand what your problem is.
Each client must have a unique id.
What are you observing that makes you think it is creating multiple connections for a single client?
Is there a chance you are opening multiple windows and each is generating a different clientId?
A good way to diagnose issues is by monitoring what the server is seeing.
My paho-mqtt client (JavaScript) connects as "webclient" and I append a random number (webclient173) to identify this client.
To troubleshoot, I would suggest you close all connections on the client and monitor the log of the MQTT process.
When the monitor is in place, open a connection from a client that currently has no connections.
This is an example connection in my Mosquitto log file:
$ tail -f /var/log/mosquitto/mosquitto.log
1635169943: No will message specified.
1635169943: Sending CONNACK to webclient173 (0, 0)
1635169943: Received SUBSCRIBE from webclient173
1635169943: testtopic (QoS 0)
1635169943: Sending SUBACK to webclient173
1635170003: Received PINGREQ from webclient173
1635170003: Sending PINGRESP to webclient173
1635170003: Received PINGREQ from webclient173
1635170003: Sending PINGRESP to webclient173
What does your log show?

Google Cloud Run pubsub pull listener app fails to start

I'm testing a Pub/Sub "pull" subscriber on Cloud Run using just the listener part of this sample Java code (SubscribeAsyncExample... reworked slightly to fit in my Spring Boot app):
https://cloud.google.com/pubsub/docs/quickstart-client-libraries#java_1
It fails to start up during deploy... but while it's trying to start, it does pull items from the Pub/Sub queue. Originally, I had an HTTP "push" receiver (a @RestController) on a different Pub/Sub topic and that worked fine. Any suggestions? I'm new to Cloud Run. Thanks.
Deploying...
Creating Revision... Cloud Run error: Container failed to start. Failed to start and then listen on the port defined
by the PORT environment variable. Logs for this revision might contain more information....failed
Deployment failed
In logs:
2020-08-11 18:43:22.688 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 4606 ms
2020-08-11T18:43:25.287759Z Listening for messages on projects/ce-cxmo-dev/subscriptions/AndySubscriptionPull:
2020-08-11T18:43:25.351650801Z Container Sandbox: Unsupported syscall setsockopt(0x18,0x29,0x31,0x3eca02dfd974,0x4,0x28). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information.
2020-08-11T18:43:25.351770555Z Container Sandbox: Unsupported syscall setsockopt(0x18,0x29,0x12,0x3eca02dfd97c,0x4,0x28). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information.
2020-08-11 18:43:25.680 WARN 1 --- [ault-executor-0] i.g.n.s.i.n.u.internal.MacAddressUtil : Failed to find a usable hardware address from the network interfaces; using random bytes: ae:2c:fb:e7:92:9c:2b:24
2020-08-11T18:45:36.282714Z Id: 1421389098497572
2020-08-11T18:45:36.282763Z Data: We be pub-sub'n in pull mode2!!
Nothing else after this and the app stops running.
@Component
public class AndyTopicPullRecv {
    public AndyTopicPullRecv()
    {
        subscribeAsyncExample("ce-cxmo-dev", "AndySubscriptionPull");
    }

    public static void subscribeAsyncExample(String projectId, String subscriptionId) {
        ProjectSubscriptionName subscriptionName =
            ProjectSubscriptionName.of(projectId, subscriptionId);
        // Instantiate an asynchronous message receiver.
        MessageReceiver receiver =
            (PubsubMessage message, AckReplyConsumer consumer) -> {
                // Handle incoming message, then ack the received message.
                System.out.println("Id: " + message.getMessageId());
                System.out.println("Data: " + message.getData().toStringUtf8());
                consumer.ack();
            };
        Subscriber subscriber = null;
        try {
            subscriber = Subscriber.newBuilder(subscriptionName, receiver).build();
            // Start the subscriber.
            subscriber.startAsync().awaitRunning();
            System.out.printf("Listening for messages on %s:\n", subscriptionName.toString());
            // Allow the subscriber to run for 30s unless an unrecoverable error occurs.
            // subscriber.awaitTerminated(30, TimeUnit.SECONDS);
            subscriber.awaitTerminated();
            System.out.printf("Async subscribe terminated on %s:\n", subscriptionName.toString());
        // } catch (TimeoutException timeoutException) {
        } catch (Exception e) {
            // Shut down the subscriber after 30s. Stop receiving messages.
            subscriber.stopAsync();
            System.out.printf("Async subscriber exception: " + e);
        }
    }
}
Kolban's question is very important!! With the shared code, I would say "No". The Cloud Run contract is clear:
Your service must answer HTTP requests. Outside of a request, you pay nothing and no CPU is dedicated to your instance (the instance is like a daemon when no request is being processed).
Your service must be stateless (not your case here; I won't spend time on this).
If you want to pull from your PubSub subscription, create an endpoint in your code with a REST controller. While you are processing this request, run your pull mechanism and process messages, as in the sketch below.
This endpoint can be called by Cloud Scheduler regularly to keep the process up.
Be careful: there is a maximum request processing timeout of 15 minutes (today, subject to change in the near future). So you can't run your process for more than 15 minutes. Make it resilient to failure and set your scheduler to call your service every 15 minutes.
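A minimal sketch of that pattern, reusing the Pub/Sub client code from the question, assuming Spring Boot (the /pull path and the 9-minute window are arbitrary illustrative choices, not Cloud Run requirements):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PullOnRequestController {

    // Called by Cloud Scheduler; we only pull while this request is in flight,
    // so Cloud Run keeps CPU allocated to the instance.
    @PostMapping("/pull")
    public String pull() {
        ProjectSubscriptionName subscriptionName =
            ProjectSubscriptionName.of("ce-cxmo-dev", "AndySubscriptionPull");
        MessageReceiver receiver =
            (PubsubMessage message, AckReplyConsumer consumer) -> {
                System.out.println("Id: " + message.getMessageId());
                System.out.println("Data: " + message.getData().toStringUtf8());
                consumer.ack();
            };
        Subscriber subscriber = Subscriber.newBuilder(subscriptionName, receiver).build();
        subscriber.startAsync().awaitRunning();
        try {
            // Pull for a bounded window, well under the Cloud Run request timeout.
            subscriber.awaitTerminated(9, TimeUnit.MINUTES);
        } catch (TimeoutException expected) {
            // Normal exit: the window elapsed without an unrecoverable error.
        } finally {
            subscriber.stopAsync();
        }
        return "done";
    }
}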

Mongoose opening multiple unwanted TCP sockets on reconnect

Wanting to test a MongoDB server up/down procedure connected to Node/Mongoose, we found out that Mongoose can sometimes open hundreds of TCP sockets (which is unnecessary and potentially blocking for a user who is limited to a certain number of sockets). This occurs in the following case and environment:
Node supervised with PM2 and MongoDB supervised with daemontools
At a normal and clean startup:
$ netstat -alpet | grep mongo
tcp 0 0 *:27017 *:* LISTEN mongo 65910844 22930/mongod
tcp 0 0 localhost.localdomain:27017 localhost.localdomain:54595 ESTABLISHED mongo 65911104 22930/mongod
The last "ESTABLISHED" line repeated 5 times since the option (poolSize: 5) is specified in Mongoose ("mongo" is the user running mongod under daemontools)
When we have the Node procedure :
mongoose.connection.on('disconnected', function () {
    var options = { server: { auto_reconnect: true, poolSize: 5,
                              socketOptions: { connectTimeoutMS: 5000 } } };
    console.log('Mongoose default connection disconnected ' + mongoose.connection.readyState);
    mongoose.connect(dbURI, options);
});
and we bring down MongoDB via daemontools (mongodbdaemon is a simple mongod command):
svc -d /service/mongodbdaemon
there is of course no mongod running on the system (verified with the netstat command), and the web server pages that use mongoose report, as expected:
{"name":"MongoError","message":"topology was destroyed"}
The problem occurs at this stage. From the moment we bring down MongoDB, Mongoose accumulates all the connect() calls made in the 'disconnected' event handler. This means that the longer we wait before bringing MongoDB back up, the more TCP connections will be opened.
So bringing up MongoDB by
svc -u /service/mongodbdaemon
gives the following:
$ netstat -alpet | grep mongo | wc -l
850 'ESTABLISHED' TCP connections to mongod !
If we bring mongod down again, the hundreds of connections remain in the TIME_WAIT state until Linux cleans up the socket pool.
Questions
Can we check if a MongoDB instance is available before connecting to it?
Can we configure Mongoose not to accumulate reconnect() attempts every millisecond or so?
Is there a buffer for pending connection operations (as there is for mongoose.insert[...]) that we can access or clean manually?
Problem reproducible on CentOS 6.7 / MongoDB 3.0.6 / mongoose 4.1.8 / node 4.0.0
Edit:
From the official mongoose site, where I posted this question after posting it here, I received an answer: "using auto_reconnect: true on the initial connect() operation (which is set by default), there is no reason to reconnect() in a disconnect event callback".
This is true and it works just fine, but the question now is why this happens and how to avoid it (it is serious enough at the Linux system level to be an issue in mongoose).
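For clarity, a minimal sketch of what that advice amounts to, using the option names from the snippet above: connect once with auto_reconnect enabled and only log inside the 'disconnected' handler.

// connect once; the driver's auto_reconnect keeps retrying on its own
var options = { server: { auto_reconnect: true, poolSize: 5,
                          socketOptions: { connectTimeoutMS: 5000 } } };
mongoose.connect(dbURI, options);

mongoose.connection.on('disconnected', function () {
    // log only -- do NOT call mongoose.connect() again here
    console.log('Mongoose default connection disconnected ' + mongoose.connection.readyState);
});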
Thanks!

JGroups, TCP_NIO multiple messages sent to nowhere

Messages are sent to ports I never specified in my configuration file.
This is my config:
[10-Jan-2011 11:02:22.917 GMT] ERROR org.jgroups.protocols.TCP_NIO - failed sending message to 192.168.50.41:8851 (116 bytes): java.lang.Exception: connection to 192.168.50.41:8851 could not be established
[10-Jan-2011 11:02:22.917 GMT] WARN org.jgroups.blocks.ConnectionTableNIO - Connection is not running, discarding message
Because you have a port_range of 2, every discovery message is sent to all of the initial_hosts defined in TCPPING, plus port_range extra ports, e.g.
TCPPING.initial_hosts=A[1000],B[1000]
port_range=2
will send discovery requests to A:1000-1002, B:1000-1002.
TCPPING is used at startup for initial discovery and by MERGE2 (not in your stack)...
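In a JGroups XML stack, this setting typically looks something like the sketch below (host names and ports are illustrative, matching the example above):

<!-- discovery probes each initial host at its listed port plus port_range
     extra ports, i.e. A:1000-1002 and B:1000-1002 here -->
<TCPPING initial_hosts="A[1000],B[1000]"
         port_range="2"/>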
