I am integrating ROS2 with an external tool and the message exchange needs to be synchronised.
How can I find out the frequency at which a message is published in ROS2?
In ROS2 you can print the frequency of a published topic to the console using
ros2 topic hz /topicname
This will return the average publish rate in Hertz.
Documentation: https://docs.ros.org/en/foxy/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Topics/Understanding-ROS2-Topics.html
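If you need the rate inside your own node rather than on the console, a minimal rclpy sketch could look like the following (assumptions: the topic is called /topicname and carries std_msgs/msg/String; swap in whatever message type your external tool actually publishes). Note that it prints a per-message rate, whereas ros2 topic hz averages over a window.

import rclpy
from rclpy.node import Node
from std_msgs.msg import String  # assumption: replace with your real message type

class HzMonitor(Node):
    def __init__(self):
        super().__init__('hz_monitor')
        self.last_stamp = None
        # assumption: /topicname is the topic whose publish rate you want to measure
        self.create_subscription(String, '/topicname', self.callback, 10)

    def callback(self, msg):
        now = self.get_clock().now()
        if self.last_stamp is not None:
            period_s = (now - self.last_stamp).nanoseconds / 1e9
            if period_s > 0.0:
                self.get_logger().info('approx. rate: %.2f Hz' % (1.0 / period_s))
        self.last_stamp = now

def main():
    rclpy.init()
    rclpy.spin(HzMonitor())
    rclpy.shutdown()

if __name__ == '__main__':
    main()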
I am trying to find a true open source alternative to our SAP PI ESB, which is going to be discontinued in our company.
We are using it for integrating several systems. We have around 100 interfaces: SOAP-SOAP (both synchronous and asynchronous), file (FTP) => SOAP, and SOAP => file.
I have read many blogs and watched quite a few videos about open source ESBs, but I think I still miss the whole picture. From what I understood, I can use Apache Camel as the core integration engine with appropriate adapters for delivering and transforming the messages, then JBoss Fuse (or perhaps OpenShift) to orchestrate the individual Camel "applications".
I am still missing the last piece of the puzzle. I was not able to find any ready-to-use solution for message monitoring. If I search for monitoring, I always find monitoring of the system (processor load, memory, number of processes, number of messages...).
But I am looking for message monitoring where I can filter messages by date/time range, interface name, sender/receiver, status (completed, error, in queue...) and look into the message payload (XML or file) before and after the transformation. Full-text search by message content and email notifications on errors would also be great.
Can anyone give me a hint on what to look for?
Wondering if anyone has a full working example of how to make a ZCL endpoint using Digi's XBee ANSI C Library?
The samples directory in that repo has some things; the commissioning server sample is helpful, but I'd love to see an example of the library actually being used for something real.
What I'm trying to make here is a simple sensor that interfaces with an existing Zigbee network (the coordinator being zigbee2mqtt with a CC2531 in my case) to report readings to Home Assistant.
I've seen mentions of an "xbee custom endpoint" example on the Digi forum, but I couldn't find that example; it sounds like that would be exactly what I need.
Thanks
The Commissioning Client and Server samples are overkill for just getting started, but they are used for "something real": the Commissioning Cluster is part of the Zigbee spec.
You might want to look at zcl_comm_startup_attributes and zcl_comm_startup_attribute_tree in src/zigbee/zcl_commissioning.c to see how you can set up an attribute tree for your cluster.
Perhaps look at include/zigbee/zcl_basic_attributes.h and samples/common/_zigbee_walker.c to see how to set up the endpoint table with a Basic cluster and its attributes. The Zigbee Walker sample shows how to use ZDO/ZDP queries to enumerate endpoints, and then ZCL queries to enumerate clusters and attributes. You can use that sample to validate the endpoint/cluster/attribute table you've set up in a particular program.
You might want to spend some time reading through the Zigbee Cluster Library specification to understand the concept of endpoints, clusters and attributes, which may help you to understand the tables you need to set up in your program to implement them.
I would like to know if there is an open issue or some related work (publication, platform) on an anomaly detection approach with Apache Flink for a streaming data scenario.
So far I have only found the one from Mux by Scott Kidder from 2017. Is there anything more recent or work in progress? Thanks!
EDIT
I also found flink-htm, which uses the HTM.java framework.
I got help from the Flink community via the mailing list, and these are some of the ideas they provided (thanks to Marta):
Anomaly detection with Flink is covered in these presentations from Microsoft 1 and Salesforce 2 at Flink Forward. This blog post 3 also describes how to build that kind of application using Kinesis Data Analytics (based on Flink).
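To give a rough feel for the kind of logic such applications run, here is a small framework-agnostic Python sketch of a rolling z-score detector; in a Flink job you would keep equivalent state and logic per key, e.g. inside a keyed process function. The window size and threshold below are arbitrary assumptions, not taken from any of the referenced talks.

from collections import deque
import math

class RollingZScoreDetector(object):
    # Flags values that deviate strongly from the mean of a rolling window.
    def __init__(self, window=50, threshold=3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        is_anomaly = False
        if len(self.values) >= 10:  # wait for a minimal history before scoring
            mean = sum(self.values) / len(self.values)
            var = sum((x - mean) ** 2 for x in self.values) / len(self.values)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                is_anomaly = True
        self.values.append(value)
        return is_anomaly

# usage: one detector per key (e.g. per sensor or per stream partition)
detector = RollingZScoreDetector()
for v in [1.0, 1.1, 0.9, 1.0, 1.2, 0.95, 1.05, 1.0, 1.1, 0.9, 9.0]:
    if detector.observe(v):
        print('anomaly: %s' % v)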
I am getting the following error message, and hence metrics are not being collected for SQL Server when using New Relic.
I am a little confused as the basic server monitor for this box is working, and it looks like metrics are being collected based on previous messages in the log file.
Any thoughts or solutions are much appreciated. I've been waiting for days for help from New Relic support, but it hasn't been forthcoming.
>2013-08-21 10:25:09,974 [8] ERROR - Error sending data to connector
>System.Net.WebException: The operation has timed out
>at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
>at System.Net.HttpWebRequest.GetRequestStream()
>at NewRelic.Platform.Binding.DotNET.Request.Send()
>at NewRelic.Microsoft.SqlServer.Plugin.Communication.SqlRequest.SendData()
>at NewRelic.Microsoft.SqlServer.Plugin.MetricCollector.SendComponentDataToCollector(ISqlEndpoint endpoint)
>2013-08-21 10:25:09,975 [SqlPoller] INFO - Recorded 186 metrics
>2013-08-21 10:25:09,975 [SqlPoller] DEBUG - SqlPoller: Sleeping for 00:01:00
While most New Relic Platform plugins are supported by the plugin author, this particular plugin is supported by New Relic. Please follow up with our support team so we can help you achieve a resolution.
You can reach New Relic support here: http://support.newrelic.com/
The response from New Relic on this issue points to the use of proxies in our network, and the lack of proxy configuration support in the SQL Server plugin at this stage.
I'm running several Python (2.7) applications and constantly hit one problem: log search (from the dashboard / admin console) is not reliable. It is fine when I search for recent log entries (they are normally found OK), but after some period (one day, for instance) it is no longer possible to find the same record with the same search query; I just get "no results". The admin console shows that I have 1 GB of logs spanning 10-12 days, so the old records should still be there to find; retention/log size limits are not the reason for this.
Specifically, I have a "cron" request that writes stats to the log every day (that is enough for me), and searching for this request always gives me only the last entry, not one entry per day of the spanned period as expected.
Is this expected behaviour (I do not see clear statements about log storage behaviour in the docs, for example), or is there something to tune? For example, would it help to log less per request? Or maybe there is some advanced use of the query language?
Please advise.
This is a known issue that has already been reported on the googleappengine issue tracker.
As an alternative, you can consider reading your application logs programmatically using the Log Service API to ingest them into BigQuery, or building your own search index.
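For the programmatic route, a minimal Python 2.7 sketch using the Log Service API might look like the one below; the 24-hour window is just an example, and the place where it prints is where you would ship the lines to BigQuery or your own index.

import time
from google.appengine.api.logservice import logservice

def dump_recent_logs(hours=24):
    end = time.time()
    start = end - hours * 3600
    # fetch request logs, including the application log lines of each request
    for request_log in logservice.fetch(start_time=start,
                                        end_time=end,
                                        include_app_logs=True):
        for app_log in request_log.app_logs:
            # ship app_log.message to BigQuery or your own search index here
            print app_log.message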
Google App Engine Developer Relations delivered a codelab at Google I/O 2012 about App Engine log ingestion into BigQuery.
And Streak released a tool called Mache and a Chrome extension to automate this use case.