How to send a link of a PlantUML configuration - plantuml

I have an activity diagram created in PlantUML:
@startuml
|#LightBlue|ILC Running Continuously|
start
note
ILC grabs data from
VOLTTRON message bus
for HVAC system
and electric meter
end note
repeat :Calculate averaged power;
repeat
repeat while (Is Averaged Power above "Demand Limit"?) is (no)
->yes;
repeat :Calculate needed demand reduction;
:Calculate AHP Weights;
:Curtail selected loads;
note
Typical to VOLTTRON edge
device on interacting
with building systems via
protocol driver framework
end note
:Wait in Minutes "Control Time";
backward: Curtail more;
note
Devices already curtailed
saved in ILC agent memory
end note
repeat while (Is "Demand Limit" goal met?) is (no)
->yes;
backward: Manage Demand;
@enduml
Is it possible to send someone a link to the entire configuration, not just the output picture link shown in the snip below, so that someone else can modify it?

I don't know whether it is documented anywhere, but from experience (trial and error, and observing the URLs of the PlantUML web server) one can see that the URLs are:
uml: http://www.plantuml.com/plantuml/uml/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000
png: http://www.plantuml.com/plantuml/png/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000
svg: http://www.plantuml.com/plantuml/svg/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000
ASCII art: http://www.plantuml.com/plantuml/txt/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000
In other words, the part uml / png / svg / txt changes according to the output format.
Also, when one tries a non-existing part, it reverts back to uml.
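The path segment after that part is the diagram source itself, compressed with raw DEFLATE and encoded with PlantUML's base64 variant. A minimal sketch of building such a link, assuming the server's default deflate-based encoding (the function name plantuml_url is mine):

```python
import base64
import string
import zlib

# PlantUML uses a base64 variant whose alphabet order differs from standard base64
STD_B64 = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/"
PLANTUML_B64 = string.digits + string.ascii_uppercase + string.ascii_lowercase + "-_"

def plantuml_url(text, fmt="uml"):
    """Build a shareable PlantUML server URL from diagram source text."""
    # Raw DEFLATE stream (no zlib header or checksum), as the server expects
    compressor = zlib.compressobj(level=9, wbits=-15)
    data = compressor.compress(text.encode("utf-8")) + compressor.flush()
    b64 = base64.b64encode(data).decode("ascii").rstrip("=")
    encoded = b64.translate(str.maketrans(STD_B64, PLANTUML_B64))
    return "http://www.plantuml.com/plantuml/%s/%s" % (fmt, encoded)
```

Switching fmt between uml, png, svg, and txt reproduces the four URL flavors above; the uml form opens the editable source on the server.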

Related

How to check pub sub batch setting at publisher end really work as per configuration?

I am new to the GCP world. I have to check whether my batching settings for publishing messages to Pub/Sub really work or not. These are the batch settings:
private BatchingSettings getBatchingSettings() {
    long requestBytesThreshold = 10000L;
    long messageCountBatchSize = 100L;
    Duration publishDelayThreshold = Duration.ofMillis(2000);
    BatchingSettings batchingSettings = BatchingSettings.newBuilder()
            .setElementCountThreshold(messageCountBatchSize)
            .setRequestByteThreshold(requestBytesThreshold)
            .setDelayThreshold(publishDelayThreshold)
            .build();
    return batchingSettings;
}
I have to check whether Pub/Sub publishes the messages in batches of 100 or not.
Is there any way to check how many messages are really published per batch?
As explained in the documentation, you can monitor Pub/Sub in Cloud Monitoring. When you follow the link you will go to Cloud Monitoring on your project.
In Metrics Explorer it is possible to create a metric with the following configuration:
Resource type: Cloud Pub/Sub Topic
Metric: Publish message operations
Group by: topic_id
Aggregator: sum
Minimum alignment period: 1 minute
In "SHOW ADVANCED OPTIONS" set:
Aligner: sum
If you put such a chart on a dashboard you can check the count of published messages there. Now just submit a separate testing batch and wait for the peak on the chart. When you hover over the chart line you will see the number of messages in a particular time period. Sometimes it will be divided into more parts, but with a batch as small as 100 it should be no more than 2, so it is enough to add the 2 numbers.
Of course you can create more sophisticated metrics; this is just an example.
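When reasoning about what the chart should show, it helps to remember that the three thresholds are "whichever trips first": a batch is emitted as soon as the message count, total bytes, or delay threshold is reached. A toy model of the count and byte thresholds (this is not the client library, just an illustration; the class name is mine):

```python
class ToyBatcher:
    """Toy model of Pub/Sub publisher batching: a batch is flushed as soon
    as the message-count or byte threshold is reached (the real client
    also flushes after a delay threshold elapses)."""

    def __init__(self, max_messages=100, max_bytes=10_000):
        self.max_messages = max_messages
        self.max_bytes = max_bytes
        self._batch, self._bytes = [], 0
        self.flushed = []  # sizes of emitted batches, for inspection

    def publish(self, msg: bytes):
        self._batch.append(msg)
        self._bytes += len(msg)
        if len(self._batch) >= self.max_messages or self._bytes >= self.max_bytes:
            self.flush()

    def flush(self):
        if self._batch:
            self.flushed.append(len(self._batch))
            self._batch, self._bytes = [], 0

b = ToyBatcher()
for _ in range(250):
    b.publish(b"x" * 20)   # small messages: the count threshold trips first
b.flush()                  # stand-in for the delay threshold draining the rest
print(b.flushed)           # -> [100, 100, 50]
```

With larger messages the byte threshold trips first instead, so the observed batch sizes on the chart depend on message size, not only on setElementCountThreshold.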

get_multiple_points volttron RPC call

Any chance I could get a tip on the proper way to build an agent that reads multiple points from multiple devices on a BACnet system? I am viewing the Actuator agent code, trying to learn how to make the proper RPC call.
So I am going through the agent development procedure with the agent creation wizard.
In the init I have this just hard coded at the moment:
def __init__(self, **kwargs):
    super(Setteroccvav, self).__init__(**kwargs)
    _log.debug("vip_identity: " + self.core.identity)
    self.default_config = {}
    self.agent_id = "dr_event_setpoint_adj_agent"
    self.topic = "slipstream_internal/slipstream_hq/"
    self.jci_zonetemp_string = "/ZN-T"
So the BACnet system in the building has JCI VAV boxes, all with the same zone temperature sensor point (self.jci_zonetemp_string), and self.topic is how I pulled them into the VOLTTRON config store through the BACnet discovery process.
In my actuate-point function (copied from the CSV driver example), am I at all close on how to make the RPC call named reads using get_multiple_points? I'm hoping to scrape the zone temperature sensor readings on BACnet device IDs 6, 7, 8, 9, and 10, which are all the same VAV box controller with the same points/BAS program running.
def actuate_point(self):
    """
    Request that the Actuator set a point on the CSV device
    """
    # Create a start and end timestep to serve as the times we reserve to communicate with the CSV Device
    _now = get_aware_utc_now()
    str_now = format_timestamp(_now)
    _end = _now + td(seconds=10)
    str_end = format_timestamp(_end)
    # Wrap the timestamps and device topic (used by the Actuator to identify the device) into an actuator request
    schedule_request = [[self.ahu_topic, str_now, str_end]]
    # Use a remote procedure call to ask the actuator to schedule us some time on the device
    result = self.vip.rpc.call(
        'platform.actuator', 'request_new_schedule', self.agent_id,
        'my_test', 'HIGH', schedule_request).get(timeout=4)
    _log.info(f'*** [INFO] *** - SCHEDULED TIME ON ACTUATOR From "actuate_point" method success')
    reads = publish_agent.vip.rpc.call(
        'platform.actuator',
        'get_multiple_points',
        self.agent_id,
        [(('self.topic'+'6', self.jci_zonetemp_string)),
         (('self.topic'+'7', self.jci_zonetemp_string)),
         (('self.topic'+'8', self.jci_zonetemp_string)),
         (('self.topic'+'9', self.jci_zonetemp_string)),
         (('self.topic'+'10', self.jci_zonetemp_string))]).get(timeout=10)
Any tips before I break something on the live system greatly appreciated :)
The basic form of an RPC call to the actuator is as follows:
# use the agent's VIP connection to make an RPC call to the actuator agent
result = self.vip.rpc.call('platform.actuator', <RPC exported function>, <args>).get(timeout=<seconds>)
Because we're working with devices, we need to know which devices we're interested in and what their topics are. We also need to know which points on those devices we're interested in.
device_map = {
    'device1': '201201',
    'device2': '201202',
    'device3': '201203',
    'device4': '201204',
}
building_topic = 'campus/building'
all_device_points = ['point1', 'point2', 'point3']
Getting points with the actuator requires a list of point topics, or device/point topic pairs.
# we only need one of the following:
point_topics = []
for device in device_map.values():
    for point in all_device_points:
        point_topics.append('/'.join([building_topic, device, point]))

device_point_pairs = []
for device in device_map.values():
    for point in all_device_points:
        device_point_pairs.append(('/'.join([building_topic, device]), point,))
Now we send our RPC request to the actuator:
# device_point_pairs can be used instead of point_topics
point_results = self.vip.rpc.call('platform.actuator', 'get_multiple_points', point_topics).get(timeout=3)
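Applied to the names from the question, building the pairs for devices 6 through 10 might look like the sketch below. Note that the topic prefix has to be the attribute's value, not the string literal 'self.topic'; the RPC line is commented out because it only works inside a running agent, and it assumes (based on the actuator's get_multiple_points) that the reply is a pair of result and error dictionaries:

```python
# Hypothetical sketch using the topic prefix and point name from the question
topic_prefix = 'slipstream_internal/slipstream_hq/'
zonetemp_point = 'ZN-T'

# (device topic, point name) pairs for BACnet device IDs 6..10
pairs = [(topic_prefix + str(dev), zonetemp_point) for dev in (6, 7, 8, 9, 10)]
print(pairs[0])  # ('slipstream_internal/slipstream_hq/6', 'ZN-T')

# Inside the agent (assumption: the actuator returns a (results, errors) pair):
# results, errors = self.vip.rpc.call(
#     'platform.actuator', 'get_multiple_points', pairs).get(timeout=10)
```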
Maybe it's just my interpretation of your question, but it seems a little open-ended, so I shall respond in a similar vein: generally (and I'll try to keep it short).
First, you need the list of info for targeting each device in turn; i.e. it might consist of just an IP(v4) address (for the physical device) and the (logical) device's BOIN (BACnet Object Instance Number), or, if the request is being routed/forwarded via a BACnet router/BACnet gateway, then maybe also the DNET number and the DADR too.
Then you probably want, for each device one at a time, to retrieve the first/0-element value of the device's Object-List property, in order to get the number of objects it contains; that tells you how many objects are available (including the logical device/device-type object) that you need to retrieve and iterate over. NOTE: in the real world, as much as it's common for the device-type object to be the first one in the list, there's no guarantee that will always be the case.
As much as the BACnet standard started allowing for the retrieval of the Property-List property from each and every object, most equipment doesn't yet support it, so you might need your own idea of which properties (at least the ones of interest to you) each different object type supports; at the very least, know which object types support the Present-Value property and which don't.
One ideal would be to have the following mocked facets as fakes for testing purposes, instead of testing against a live/important device (or at least consider testing against a noddy BACnet-enabled Raspberry Pi or similar hardware):
a mock for your BACnet service
a mock for the BACnet communication stack
a mock for your device as a whole (- if you can't write your own one, then maybe even start with the YABE 'Room Control Simulator' as a starting point)
Hope this helps (in some way).

Dronekit Example Follow Me Python Script not working

I am trying to run an example script from DroneKit. The code looks like this:
import gps
import socket
import time
from droneapi.lib import VehicleMode, Location
def followme():
    """
    followme - A DroneAPI example

    This is a somewhat more 'meaty' example on how to use the DroneAPI.  It uses the
    python gps package to read positions from the GPS attached to your laptop and
    every two seconds it sends a new goto command to the vehicle.

    To use this example:
    * Run mavproxy.py with the correct options to connect to your vehicle
    * module load api
    * api start <path-to-follow_me.py>

    When you want to stop follow-me, either change vehicle modes from your RC
    transmitter or type "api stop".
    """
    try:
        # First get an instance of the API endpoint (the connect via web case will be similar)
        api = local_connect()
        # Now get our vehicle (we assume the user is trying to control the first vehicle attached to the GCS)
        v = api.get_vehicles()[0]

        # Don't let the user try to fly while the board is still booting
        if v.mode.name == "INITIALISING":
            print "Vehicle still booting, try again later"
            return

        cmds = v.commands
        is_guided = False  # Have we sent at least one destination point?

        # Use the python gps package to access the laptop GPS
        gpsd = gps.gps(mode=gps.WATCH_ENABLE)

        while not api.exit:
            # This is necessary to read the GPS state from the laptop
            gpsd.next()

            if is_guided and v.mode.name != "GUIDED":
                print "User has changed flight modes - aborting follow-me"
                break

            # Once we have a valid location (see gpsd documentation) we can start moving our vehicle around
            if (gpsd.valid & gps.LATLON_SET) != 0:
                altitude = 30  # in meters
                dest = Location(gpsd.fix.latitude, gpsd.fix.longitude, altitude, is_relative=True)
                print "Going to: %s" % dest

                # A better implementation would only send new waypoints if the position had changed significantly
                cmds.goto(dest)
                is_guided = True
                v.flush()

            # Send a new target every two seconds
            # For a complete implementation of follow me you'd want to adjust this delay
            time.sleep(2)
    except socket.error:
        print "Error: gpsd service does not seem to be running, plug in USB GPS or run run-fake-gps.sh"

followme()
I tried to run it on my Raspberry Pi with Raspbian OS, but I got an error message like this:
Error: gpsd service does not seem to be running, plug in USB GPS or run run-fake-gps.sh
I get the feeling that my Raspberry Pi needs a GPS device attached before I can run this script, but I don't really know.
Please kindly tell me what's wrong with it.
The full instructions I followed are here:
http://python.dronekit.io/1.5.0/examples/follow_me.html
As the example says:
[This example] will use a USB GPS attached to your laptop to have the vehicle follow you as you walk around a field.
Without a GPS device, the code doesn't know where you are so it would not be possible to implement any sort of "following" behavior. Before running the example, you would need to:
Acquire some sort of GPS device (I use one of these, but there are lots of alternatives).
Configure gpsd on your laptop to interface with the GPS device.
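Since the script raises that error whenever it cannot reach gpsd, a quick sanity check before running the example is to see whether a gpsd daemon is listening on its default TCP port, 2947. A small sketch (the function name is mine):

```python
import socket

def gpsd_running(host="127.0.0.1", port=2947):
    """Return True if something accepts TCP connections on gpsd's default port."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:  # connection refused, timeout, host unreachable, ...
        return False

if gpsd_running():
    print("gpsd appears to be running")
else:
    print("gpsd not reachable - attach a GPS device and start gpsd first")
```

If this reports gpsd as not reachable, start it (e.g. against your USB GPS device) before launching the follow-me script.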

Move a graph trained on the GPU to be tested on the CPU

So I have this CNN which I train on the GPU. During training, I regularly save checkpoints.
Later on, I want to have a small script that reads the .meta file and the checkpoint and does some tests on a CPU. I use the following code:
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
with sess.as_default():
    with tf.device('/cpu:0'):
        saver = tf.train.import_meta_graph('{}.meta'.format(model))
        saver.restore(sess, model)
I keep getting an error telling me that the saver is trying to place operations on the GPU.
How can I change that?
Move all the ops to the CPU using the _set_device API: https://github.com/tensorflow/tensorflow/blob/r1.14/tensorflow/python/framework/ops.py#L2255
with tf.Session() as sess:
    g = tf.get_default_graph()
    ops = g.get_operations()
    for op in ops:
        op._set_device('/device:CPU:*')
A hacky work-around: open your graph definition file (ending with .pbtxt) and remove all lines starting with device:.
For a programmatic approach, you can see how the TensorFlow exporter does this with clear_devices, although that uses the regular Saver, not the meta graph exporter.
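Since the question re-imports the meta graph anyway, another option worth noting is the clear_devices parameter that tf.train.import_meta_graph itself accepts: it strips the device placements recorded in the MetaGraphDef at import time. A sketch, assuming TensorFlow 1.x and a checkpoint path held in model as in the question:

```python
import tensorflow as tf

# clear_devices=True removes the recorded GPU placements from the imported
# meta graph; allow_soft_placement lets TF fall back to CPU for any op that
# still carries an unsatisfiable device constraint.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
with sess.as_default():
    saver = tf.train.import_meta_graph('{}.meta'.format(model), clear_devices=True)
    saver.restore(sess, model)
```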

How to list SMIv1 MIBs with the net-snmp MIB API in C/C++?

I want to display a list of various MIBs with net-snmp and show other information related to the SNMP tree structure.
Now it turns out that my customer's SMIv1 MIBs do not show up in the listing, but they are correctly loaded by net-snmp.
Sample net-snmp code walks through the MIB structure in memory and assumes the SMIv2 bijection between a MIB and a MODULE-IDENTITY node in the tree: when we find a MODULE-IDENTITY node, we have found a MIB.
Does anybody know the correct method to list SMIv1 MIBs with net-snmp? (Or any workaround?)
// Read MIBs and obtain memory structures
struct tree * head = read_all_mibs();

// Walk down the SNMP tree
for ( struct tree * tp = head; tp; tp = tp->next_peer )
{
    // MODULE-IDENTITY
    if ( tp->type == TYPE_MODID )
    {
        // XXX We found an SMIv2 MIB, but SMIv1 MIBs have no MODULE-IDENTITY node
    }
}
NB: I found a converter, smidump (a command-line tool, or a web service at http://www.ibr.cs.tu-bs.de/projects/libsmi/tools/), but it does not add a MODULE-IDENTITY node to the MIB.
Edit: Note that any tool that could convert an old SNMP MIB to a more recent (SMIv2-style) one could solve the problem. Any help in that particular direction?
One suggestion could be, in the absence of MODULE-IDENTITY, to find the root OBJECT IDENTIFIER of the MIB (sometimes a MIB adds nodes at many different, unrelated places, so this would not always work). With a root node I could show most of the tree related to that MIB.
It uses UDP datagrams. You can get the sources of net-snmp, or sniff the UDP traffic (which looks like the easier way).
net-snmp is an agent (i.e. server) running SNMP on a device. What client (i.e. MIB browser or command-line tool such as snmpget/snmpwalk etc.) are you using to query it? Is your client also loaded with the same MIB as the server?
I presume you are using SNMPv1. Are you using the correct access community, i.e. are you typing the correct password from the client (MIB browser or command-line SNMP client) to authenticate with the SNMP agent/server?
I suggest using a GUI-based client (called an SNMP manager) for locating the problem, such as the MG-SOFT MIB Browser.
The trial version is free, and you see the results of attempts such as a failed password (community name for SNMPv1).
Among several other possible problems:
SMIv1 is an old format, so you need to ensure the version of net-snmp you are using supports it.
If you are using SNMPv2, it is possible that you are authenticating with the correct community but your community does not have read access for the MIB you wish to see. SNMPv2 introduced the concept of views, in which you can allow a certain subset of the OID tree to be visible to a particular community (user).
If it is a non-standard MIB (i.e. not part of the core SNMP MIBs), you should find its complete OID (something like 1.3.6.1.4.1...) and first check in the GUI (MIB browser) whether it exists, or otherwise debug a GET request against the specific OID.
Also understand that a non-standard MIB needs to be loaded in both the agent and the client; otherwise the client won't know the details of the MIB and cannot build query requests on its behalf.
The only solution my colleague and I found to fix the problem was to convert the "top-level" MIB(s) into a more SMIv2-like structure. That is: 1) import the type MODULE-IDENTITY, 2) replace the top-level node with a MODULE-IDENTITY declaration.
...
IMPORTS
    MODULE-IDENTITY
        FROM SNMPv2-SMI
...

-- Removed top-level node
-- compaq OBJECT IDENTIFIER ::= { enterprises 232 }

-- Add a fake module-identity node
compaq MODULE-IDENTITY
    LAST-UPDATED "200111120000Z"
    ORGANIZATION "COMPAQ"
    CONTACT-INFO
        "why.still.using.snmpv1#compaq.com"
    DESCRIPTION
        "why does compaq still provide these mibs in 2013?"
    REVISION "9407210000Z"
    DESCRIPTION
        "Normal fixed MIB module."
    ::= { enterprises 232 }
With this fix, the net-snmp library shows a MODULE-IDENTITY node for our MIB, just like every other SMIv2 MIB.
