Providing Virtual Distance Sensor Measurements to Autopilot - dronekit-python

I am trying to provide virtual rangefinder module measurements to the autopilot.
So far I have found the following links:
https://mavlink.io/en/messages/common.html#DISTANCE_SENSOR
https://mavlink.io/en/messages/common.html#MAV_SENSOR_ORIENTATION
http://ardupilot.org/dev/docs/code-overview-object-avoidance.html
How can I create a MAVLink message using this information?
I want to send readings from 8 virtual rangefinders (their MAV_SENSOR_ORIENTATION values correspond to headings of 0, 45, 90, 135, 180, 225, 270 and 315 degrees respectively).
Is there a way to create MAVLink messages and send them to the autopilot from an external source?
I am using SITL to create a vehicle and connecting to this vehicle using python dronekit module.
Thanks for any answers.
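A minimal sketch of one way to do this with dronekit's message_factory (which wraps pymavlink); the connection string, distances, and update rate below are assumptions, and the DISTANCE_SENSOR fields follow the common.xml definition linked above. Note that MAV_SENSOR_ORIENTATION encodes the eight horizontal headings as enum values (ROTATION_NONE=0, ROTATION_YAW_45=1, ..., ROTATION_YAW_315=7), not as degrees:

from dronekit import connect
import time

# Assumed SITL endpoint; adjust to your setup.
vehicle = connect('tcp:127.0.0.1:5760', wait_ready=True)

# MAV_SENSOR_ORIENTATION enum values for the eight headings, 45 degrees apart.
ORIENTATIONS = [0, 1, 2, 3, 4, 5, 6, 7]

def send_distance_sensor(orientation, distance_cm):
    # DISTANCE_SENSOR fields: time_boot_ms, min_distance, max_distance,
    # current_distance, type, id, orientation, covariance
    msg = vehicle.message_factory.distance_sensor_encode(
        0,            # time_boot_ms: 0 lets the autopilot timestamp it
        10,           # min_distance (cm)
        1000,         # max_distance (cm)
        distance_cm,  # current_distance (cm)
        0,            # type: MAV_DISTANCE_SENSOR_LASER
        orientation,  # id: the orientation doubles as a sensor id here
        orientation,  # orientation: MAV_SENSOR_ORIENTATION enum value
        0)            # covariance
    vehicle.send_mavlink(msg)

while True:
    for o in ORIENTATIONS:
        send_distance_sensor(o, 250)  # made-up 2.5 m reading per sensor
    time.sleep(0.1)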

Related

Flink dynamic partitioning in S3 with the DataStream API

I am writing a Flink DataStream pipeline in Java where the sink is configured to write output to S3. I am trying to understand whether there is any way to dynamically partition the S3 output into directories based on values from the streaming data itself. For example:
Let's say we have two departments for class 10, i.e. science and maths. The input datastream has the fields
class, department, student_name, marks
10, science, abc, 65
10, maths, abc, 71
10, science, bcd, 59
So the pipeline should produce data in following directory structure:
s3://<bucket_name>/class=10/department=science/part-xxx
s3://<bucket_name>/class=10/department=maths/part-xxx
Please note that I know this is possible with the Table API, but I am looking for an alternative with the DataStream API. The closest option seems to be DateTimeBucketAssigner, but that will not work for my use case. Any thoughts?
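For what it's worth, in the Java DataStream API the usual route is a custom BucketAssigner on the file sink, whose getBucketId() builds the partition path from each record rather than from processing time. As a sketch of just the bucket-id logic (shown in Python for brevity; the real assigner has to implement Flink's Java BucketAssigner interface, and the field names mirror the example input):

def bucket_id(record):
    # Hive-style partition path derived from the record itself,
    # e.g. 'class=10/department=science'; the sink writes its part
    # files under this directory.
    return f"class={record['class']}/department={record['department']}"

row = {"class": 10, "department": "science", "student_name": "abc", "marks": 65}
print(bucket_id(row))  # class=10/department=science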

How to split an annotated dataset into sentences

I have a dataset annotated in the spaCy 2 format, like below:
td = ["Where is Shaka Khan lived.I Live In London.", {"entities": [(9, 19, "FRIENDS"),(32, 37, "JILLA")]}]
My dataset has sequence lengths greater than 512 and I am trying to migrate to Hugging Face, so I would like to split each document into sentences and update the tagging at the same time. Is there any tool available for that? My expected result should look like below:
td = [["Where is Shaka Khan lived.", {"entities": [(9, 19, "FRIENDS")]}],["I Live In London.", {"entities": [(10, 16, "JILLA")]}],]
Why do it with spaCy? Write a small parser that splits it, then run spaCy on the already-split sentences; it will give you the same result you want.
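A minimal sketch of such a parser, assuming sentences end at '.' and no entity spans a sentence boundary (the local offsets in the output are derived directly from the input offsets):

def split_annotated(text, entities):
    # Split on '.' and shift each entity's offsets into the coordinate
    # space of the sentence that contains it. Deliberately naive: a real
    # dataset may need a proper sentence segmenter.
    results = []
    start = 0
    for i, ch in enumerate(text):
        if ch == '.':
            end = i + 1
            local = [(s - start, e - start, label)
                     for s, e, label in entities
                     if start <= s and e <= end]
            results.append([text[start:end], {"entities": local}])
            start = end
    return results

td = ["Where is Shaka Khan lived.I Live In London.",
      {"entities": [(9, 19, "FRIENDS"), (32, 37, "JILLA")]}]
print(split_annotated(td[0], td[1]["entities"]))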

VOLTTRON scheduling actuator agent with CRON

For my VOLTTRON agent that I developed with the agent creation wizard, can I get a tip on the error 'Timezone offset does not match system offset: -18000 != 0. Please, check your config files.'?
When testing my script with the from volttron.platform.scheduling import cron feature, I noticed the timezone/computer time was way off on my edge device, so I reset the time zone with this tutorial, which I am thinking definitely screwed things up.
ERROR: volttron.platform.jsonrpc.RemoteError: builtins.ValueError('Timezone offset does not match system offset: -18000 != 0. Please, check your config files.')
Whether or not this makes a difference, this edge device does use the forwarding agent to push the data to a central VOLTTRON instance.
2021-05-14 12:45:00,007 (actuatoragent-1.0 313466) volttron.platform.vip.agent.subsystems.rpc ERROR: unhandled exception in JSON-RPC method 'request_new_schedule':
Traceback (most recent call last):
  File "/var/lib/volttron/volttron/platform/vip/agent/subsystems/rpc.py", line 158, in method
    return method(*args, **kwargs)
  File "/home/volttron/.volttron/agents/8f4ee1c0-74cb-4070-8a8c-57bf9bea8a71/actuatoragent-1.0/actuator/agent.py", line 1343, in request_new_schedule
    return self._request_new_schedule(rpc_peer, task_id, priority, requests, publish_result=False)
  File "/home/volttron/.volttron/agents/8f4ee1c0-74cb-4070-8a8c-57bf9bea8a71/actuatoragent-1.0/actuator/agent.py", line 1351, in _request_new_schedule
    local_tz = get_localzone()
  File "/var/lib/volttron/env/lib/python3.8/site-packages/tzlocal/unix.py", line 165, in get_localzone
    _cache_tz = _get_localzone()
  File "/var/lib/volttron/env/lib/python3.8/site-packages/tzlocal/unix.py", line 90, in _get_localzone
    utils.assert_tz_offset(tz)
  File "/var/lib/volttron/env/lib/python3.8/site-packages/tzlocal/utils.py", line 46, in assert_tz_offset
    raise ValueError(msg)
ValueError: Timezone offset does not match system offset: -18000 != 0. Please, check your config files.
This is my raise_setpoints_up function below, which is a lot like the CSV driver agent code.
def raise_setpoints_up(self):
    _log.info(f'*** [Setter Agent INFO] *** - STARTING raise_setpoints_up function!')
    schedule_request = []
    # create start and end timestamps
    _now = get_aware_utc_now()
    str_start = format_timestamp(_now)
    _end = _now + td(seconds=10)
    str_end = format_timestamp(_end)
    # wrap the topic and timestamps up in a list and add it to the schedules list
    for device in self.jci_device_map.values():
        topic = '/'.join([self.building_topic, device])
        schedule_request.append([topic, str_start, str_end])
    # send the request to the actuator
    result = self.vip.rpc.call(
        'platform.actuator', 'request_new_schedule',
        self.core.identity, 'my_schedule', 'HIGH',
        schedule_request).get(timeout=4)
    _log.info(f'*** [Setter Agent INFO] *** - actuator agent scheduled successfully!')
Thanks for any tips
I suspect the time configured by tzdata is different from the timezone configured by the system, since you changed this manually. Give this a try:
sudo dpkg-reconfigure tzdata
Use sudo dpkg-reconfigure tzdata to change back to UTC time.
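To verify the fix took, you can reproduce tzlocal's offset assertion (the same comparison that raises this ValueError) by hand; a small sketch:

import time
from datetime import datetime
from tzlocal import get_localzone

tz = get_localzone()
tz_offset = datetime.now(tz).utcoffset().total_seconds()
# tzlocal compares against the libc view of the current UTC offset:
system_offset = -time.altzone if time.localtime().tm_isdst else -time.timezone
print(tz, tz_offset, system_offset)  # the two offsets should now match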

Data Studio Community Connectors: Combine time and non-time based metrics

I'm building a connector that connects to an API that offers endpoints for both time series-based metrics (such as 'number of users per day'), as well as static metrics (such as 'project status').
If I am to build a single connector, I would have to insert the static metrics within the time series values, right?
So, if my schema is
[{name=day}, {name=users}, {name=status}]
then my values would look like
['20181101', 150, '85%']
['20181102', 125, '85%']
['20181103', 134, '85%']
['20181104', 185, '85%']
['20181105', 111, '85%']
['20181106', 123, '85%']
since the 'status' field is not time-dependent.
While this seems to work, it looks pretty inefficient. Is there anything I'm missing, or should I build a separate connector for the static metrics endpoints?
Thanks!
From your description, it sounds like you are trying to merge two independent tables. If you have time series data with dates and a separate series of project statuses without dates, then split your connector to return two different series.
The getSchema method receives a request object, since it comes after the getConfig() call. The schema can thus be split depending on a config parameter, e.g.:
function getSchema(request) {
  switch (request.configParams.myTableOption) {
    case 'tableOption1':
      return mySchemaObject.tableOption1;
    case 'tableOption2':
      return mySchemaObject.tableOption2;
  }
}
Super simplified, of course, but that should give a much more flexible connector that can return different table types. You also have to split getData() similarly to return the right data, but the same configParam carries through subsequent requests for that connector, so you can rely on it there as well.

Grafana: Get the max of max for singlestat

I have been searching the web looking for an answer to this, but I cannot seem to find an answer or figure it out.
I am new to Grafana and I am trying to set up a singlestat gauge. I have:
16 servers (HPG6-01 to HPG6-16)
Each server has 8 cores
Each server has a Nagios plugin that sends the maximum temperature across all cores. For instance, the Core 0-7 temperatures for HPG6-01 are T = [33,34,55,45,37,38,46,33]; the Nagios plugin returns Max-Temp = max(T) = 55.
Performance data is sent to Shinken, which has a plugin for Graphite.
I can plot the current Max-Temp in Grafana which is easy (see the line graph below). But I also want the maximum of Max-Temp across the 16 servers to be displayed as a single stat. For example,
MT = [34, 56, 60, ...] #the Max-Temp for each of the servers
singlestat = max(MT)
The metrics for the single stat are shown in the screenshot below:
The options for the single stat is shown below:
Any ideas on how I can do that? I tried consolidateBy(max) and I get the error "Metric query returns 16 series. Single Stat Panel expects a single series." because consolidateBy only consolidates the datapoints within each series; it still returns 16 series rather than a single one.
I like to use the highestMax function for this.
highestMax(HPG6*.shinken.Core_Temp.Max-Temp, n)
This function will only show the n series with the highest values; for your use case, I would just use 1 to get the absolute maximum value across all series.
This will show the series with the absolute maximum value across the selected time frame. To see the highest current value instead, use highestCurrent.
