Spring Cloud: How to read Hystrix metrics data programmatically?

My application uses Hystrix as a circuit breaker. I want to export Hystrix metrics data to InfluxDB (or another storage service), but I didn't find any documents talking about how to read these data programmatically.
Thanks!

I found this blog post very useful on the subject: http://www.nurkiewicz.com/2015/02/storing-months-of-historical-metrics.html .
It covers exporting data to Graphite, but I am sure the approach can be extended to InfluxDB as well.

If you want a custom way of writing out Hystrix metrics data, you can see here
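For a concrete picture of what "reading the stream" involves: Hystrix publishes one JSON payload per command on its `/hystrix.stream` SSE endpoint, and each event can be rewritten into InfluxDB line protocol before posting it to InfluxDB. A minimal Python sketch; the measurement name, the chosen fields, and the timestamp handling are illustrative assumptions (real line protocol may also want integer type suffixes):

```python
import json

def hystrix_event_to_line_protocol(sse_line: str) -> str:
    """Convert one `data: {...}` event from /hystrix.stream into an
    InfluxDB line-protocol string. Measurement and field choices here
    are illustrative, not a fixed schema."""
    payload = json.loads(sse_line.removeprefix("data:").strip())
    fields = ",".join(
        f"{k}={payload[k]}"
        for k in ("requestCount", "errorCount", "latencyTotal_mean")
        if k in payload)
    # line protocol: measurement,tag=value field=value,... timestamp
    # currentTime is in milliseconds; InfluxDB defaults to nanoseconds.
    return (f"hystrix_command,name={payload['name']} "
            f"{fields} {payload['currentTime'] * 1_000_000}")
```

Each resulting line can then be batched and sent to InfluxDB's write endpoint.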

Related

What is the recommended way to create a Custom Sink for AWS Sagemaker Feature Store in Apache Flink?

I want to create a Custom Apache Flink Sink to AWS Sagemaker Feature store, but there is no documentation for how to create custom sinks on Flink's website. There are also multiple base classes that I can potentially extend (e.g. AsyncSinkBase, RichSinkFunction), so I'm not sure which to use.
I am looking for guidelines on how to implement a custom sink (both in general and for my specific use-case). For my specific use-case: Sagemaker Feature Store has a synchronous client with a putRecord call to send records to AWS Sagemaker FS, so I am ideally looking for a way to create a custom sink that would work well with this client. Note: I require at-least-once processing guarantees, as Sagemaker FS is backed by DynamoDB (a key-value store) under the hood.
Java Client: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/sagemakerfeaturestoreruntime/AmazonSageMakerFeatureStoreRuntime.html
Example of the putRecord call using the Python client: https://github.com/aws-samples/amazon-sagemaker-feature-store-streaming-aggregation/blob/main/src/lambda/StreamingIngestAggFeatures/lambda_function.py#L31
What I've Found so Far
Some older articles which say to use org.apache.flink.streaming.api.functions.sink.RichSinkFunction and SinkFunction
Some connectors using classes in org.apache.flink.connector.base.sink.writer (e.g. AsyncSinkWriter, AsyncSinkBase)
This section of the Flink docs says to use the SourceReaderBase from org.apache.flink.connector.base.source.reader when creating custom sources; SourceReaderBase seems to be the source-side equivalent of the sink classes in the bullet above
Any help/guidance/insights are much appreciated, thanks.
How about extending RichAsyncFunction?
You can find a similar example here: https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/asyncio/#async-io-api
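Whichever base class you end up extending, the core of a sink wrapping a synchronous client like putRecord is a buffer-and-flush loop, with a flush forced at checkpoint time (that is what gives you at-least-once). A language-agnostic sketch of that pattern, written in Python for brevity; `put_record` here is a hypothetical stand-in for the Sagemaker Feature Store client call, not the real API:

```python
class BufferingSink:
    """Minimal sketch of the buffer-and-flush pattern that an
    AsyncSinkWriter-style Flink sink implements around a synchronous
    client. `put_record` is a hypothetical stand-in for the
    Sagemaker Feature Store putRecord call."""

    def __init__(self, put_record, batch_size=10):
        self.put_record = put_record
        self.batch_size = batch_size
        self.buffer = []

    def invoke(self, record):
        # Called once per element; flush when the batch is full.
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # A real sink also calls this on checkpoint, so that buffered
        # records are persisted before the barrier completes
        # (this is what yields at-least-once semantics).
        for record in self.buffer:
            self.put_record(record)  # retried on transient failures
        self.buffer.clear()
```

The real Flink base classes add the parts this sketch omits: async retries, backpressure, and wiring `flush` into `snapshotState`.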

Push Metrics from Camel to External Service

I am trying to use Camel to push metrics to Datadog/CloudWatch. I explored the meter and micrometer components, but there is no complete example of how to actually push metrics data to an external service. I have explored all the available examples, and none of them gives me a proper big picture. The metrics I require are the stats for each running route. Any resource/example that points me there would be really helpful.
Hawtio is what you need: how many times a route has run, RAM consumption, how many classes have been loaded, and so on. Everything that comes to mind, with a very nice interface. https://hawt.io/
Publishing metrics to datadog is actually documented: https://micrometer.io/docs/registry/datadog
For CloudWatch, see this PR: https://github.com/micrometer-metrics/micrometer-docs/pull/131
If you are using spring-boot, you will get properties support for this.
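If you want to see what actually goes over the wire when metrics are pushed through the Datadog agent, the DogStatsD datagram format is just `name:value|type|#tags` sent over UDP. A small sketch of building and sending one (the metric and tag names below are made up for illustration, not Camel's actual meter names):

```python
import socket

def dogstatsd_datagram(name, value, metric_type="g", tags=None):
    """Build a DogStatsD datagram, e.g. 'camel.route.count:42|c|#route:r1'.
    Types: 'c' counter, 'g' gauge, 'h' histogram, 'ms' timer."""
    tag_part = f"|#{','.join(tags)}" if tags else ""
    return f"{name}:{value}|{metric_type}{tag_part}"

def push(datagram, host="127.0.0.1", port=8125):
    # The DogStatsD agent listens on UDP 8125 by default;
    # UDP is fire-and-forget, so this never blocks the route.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(datagram.encode(), (host, port))
    finally:
        sock.close()
```

In a Camel/Micrometer setup you would not write this yourself, but it shows why the push model needs only an agent address, no inbound endpoint.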

Apache Flink in Kubernetes

Could anyone please let me know how I can set up Flink on my serverless platform (FaaS) to perform event-driven operations?
I looked at Flink Stateful Functions and it seems promising. Could anyone clarify the below?
What do I need to install in my FaaS environment to trigger the Flink function when an event (a file change in my S3 bucket) occurs?
I don't have a big data platform, so I am planning to run Flink in my serverless/Kubernetes environment.
Thanks in advance!!
To use StateFun you would generally need:
An ingress that triggers the functions.
The actual code that reacts to your events (the stateful function), Dockerized.
A way to launch your application.
Specifically:
Every stateful function application starts with an ingress; basically, that is a funnel of events that your functions can react to.
In your case, you can use Amazon Kinesis as your Ingress, and make sure that your S3 events will end up there.
The next thing you need is to familiarize yourself with a Stateful Functions SDK, either in Java or in Python, and write the logic that deals with the incoming events. The result of that stage is a Docker image.
Then you need to launch the image obtained in the previous step; for that you can use Kubernetes (though you don't have to).
There are Helm charts provided for your convenience and a simple utility to generate the necessary k8s resources.
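To make the programming model concrete, here is a toy, SDK-free sketch of what a stateful function does: the runtime routes each ingress event to a function instance per key, and that instance keeps state between invocations. The class and method names are invented for illustration; the real Java/Python SDKs provide the routing, state management, and Docker packaging for you.

```python
class GreeterApp:
    """Toy illustration of the StateFun model: events from an ingress
    (e.g. Kinesis carrying S3 notifications) are routed to one logical
    function instance per key, which keeps state across invocations.
    In real StateFun the runtime owns this state, not the function."""

    def __init__(self):
        self.state = {}  # per-key state; fault-tolerant in the real runtime

    def on_event(self, key, event):
        # Read this key's state, update it, and react to the event.
        seen = self.state.get(key, 0) + 1
        self.state[key] = seen
        return f"{key}: event #{seen} ({event})"
```

The same S3-object key always reaches the same logical instance, which is how per-object counters or aggregates stay consistent.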

What is the benefit of pyramid_celery if I am using a standard celeryconfig?

So I have a pyramid app which stores data in zodb (Substance D) and also creates a Solr index for a speedy search of that data. Some of the Solr indexing takes a while, so I want to make it asynchronous. I am going to use RabbitMQ and Celery.
Do I benefit from using pyramid_celery? I don't want to use the ini file to store the celery config and there are no scheduled tasks so no celery beats. This is small scale and all of the processes/tasks will run on one machine.
Thanks
OK, so I am answering my own question. I asked this on the pylons google group and the response from the author of pyramid_celery was
Absolutely nothing. pyramid_celery is specifically for sharing your ini configuration / app configuration with your celery workers. If you don't have a need to share those things you have no need for pyramid_celery :)
I will also look at Mikko's option.
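For completeness, a plain-Celery setup without pyramid_celery or an ini file might look like the following. The broker URL, module name, and the `index_in_solr` task are assumptions for illustration, not part of the original question:

```python
# tasks.py -- standalone Celery app, no pyramid_celery needed.
from celery import Celery

# Hypothetical broker URL for a local RabbitMQ instance.
app = Celery("solr_indexing", broker="amqp://guest@localhost//")
app.conf.update(
    task_serializer="json",
    accept_content=["json"],
)

@app.task(bind=True, max_retries=3)
def index_in_solr(self, document_id):
    # Hypothetical task body: load the document from zodb and
    # push it to Solr, retrying on transient Solr errors.
    ...
```

A pyramid view would then enqueue work with `index_in_solr.delay(doc_id)`, and the worker runs as `celery -A tasks worker`, all configured in Python rather than in the ini file.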

How can I extract metrics?

The stock reports provided by Atlassian do not cover what I would like to investigate.
Without buying 3rd party controls, how can I extract data from Jira to play around with and create my own reports and analysis?
You can take a look at the JIRA REST API: https://docs.atlassian.com/jira/REST/latest/
You may want to use this: https://bitbucket.org/kaszaq/howfastyouaregoing/ - a library which greatly simplifies the creation of any metrics based on data from Jira. The data is cached locally, so it does not need to connect to Jira and pull issues each time.
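As a small example of the REST route: once you fetch a page of issues from `/rest/api/2/search` (with whatever JQL you like), computing your own metrics is ordinary JSON wrangling. A sketch that tallies issues per status from a parsed search response; the field layout follows Jira's documented issue schema, and the metric itself is just one illustrative choice:

```python
from collections import Counter

def issues_by_status(search_response):
    """Tally issues per status from a parsed Jira /rest/api/2/search
    response body. Each issue carries its status under
    fields -> status -> name in Jira's issue schema."""
    return Counter(
        issue["fields"]["status"]["name"]
        for issue in search_response["issues"])
```

From here, exporting to a spreadsheet or charting library is straightforward, and the same pattern extends to assignees, resolution times, and so on.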
