Multiple outputs in Stream Analytics

I'm using Stream Analytics with multiple outputs (sub-queries), but we don't see any output and no error messages appear in the logs. We are using IoT Hub as an input.

In addition to that: to prevent congestion from IoT Hub to ASA, you should add consumer groups (under Messaging in your IoT Hub settings) and map each input of your ASA job to its own consumer group.
Let us know if you need help with that.
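For reference, a multi-output Stream Analytics job is typically written as a set of independent SELECT ... INTO statements against the same input, one per output; if a sub-query matches no rows, you simply see nothing on that output. A minimal sketch in the ASA query language, where IoTHubInput, TemperatureOutput and HumidityOutput are placeholder alias names for your own inputs and outputs:

SELECT deviceId, temperature
INTO TemperatureOutput
FROM IoTHubInput
WHERE temperature IS NOT NULL

SELECT deviceId, humidity
INTO HumidityOutput
FROM IoTHubInput

Each ASA input can then be pointed at its own IoT Hub consumer group in that input's settings.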

Salesforce platform event duplicate events with CometD client

I have built a Salesforce platform event client using CometD (Java); it's similar to the EMP-Connector example provided by forcedotcom.
I deployed this client on OpenShift, and my app runs with 2 pods. The problem I am facing is that both pods run the same Docker image, so each one receives its own copy of every event. That means duplicate events.
Per my understanding, Salesforce platform events should behave like a Kafka subscriber.
I am unable to find a solution for avoiding duplicate copies of events. Any suggestions would be a great help.
Note: as of now I have built a client-side solution which drops the duplicate copy of each event, but that is not optimal.
I have to run my app with at least 2 pods; that's a constraint of my cloud environment.
This is expected / by design. In CometD, when a message is published on a broadcast channel, all subscribers listening on that channel receive a copy of the message. A broadcast channel behaves like a messaging topic, where one sender wants to send the same information to multiple recipients. There are other types of channels in CometD with different semantics, but the broadcast channel and its one-to-many message semantics are what you get with platform events exposed via CometD in Salesforce.
In your case it sounds like you have multiple subscribers, so what you're seeing is expected. You can deduplicate the message stream on the client side, as you have done, or change your architecture so that there is a single subscription.
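If you keep the client-side approach, the core of it is just remembering which replay IDs you have already handled. A minimal sketch in Scala, with illustrative names: it assumes you extract the event's replayId from the CometD message payload, and note that with two pods the "seen" set would have to live in shared storage (a cache or database) rather than in each pod's memory for it to actually remove duplicates across pods.

import scala.collection.concurrent.TrieMap

// Keeps a bounded set of replay IDs that have already been processed.
class ReplayIdDeduplicator(maxEntries: Int = 10000) {
  private val seen = TrieMap.empty[Long, Unit]

  // Returns true only the first time a given replayId is seen.
  def markIfNew(replayId: Long): Boolean = {
    if (seen.size > maxEntries) seen.clear() // crude eviction, enough for a sketch
    seen.putIfAbsent(replayId, ()).isEmpty
  }
}

// Inside your CometD message listener (pseudo-wiring):
// if (dedup.markIfNew(replayIdFromMessage)) handleEvent(message)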

How to do mobile application monitoring?

I want to be able to monitor issues from a mobile application the same way I do for backend micro-services.
I'm not aware of any real-time monitoring tools for mobile applications out there.
I think it would really help to monitor the mobile application and report errors from the application itself, not only from the backend services. The application is often connected to multiple services and has its own logic, so it seems like the one place to catch all errors and wrong behaviour.
Are there any tools out there?
If, for example, I use mParticle/Segment as a hub to report events, can I somehow connect it to Graphite, which is push-based monitoring? Maybe through SQS / AWS Lambda?
https://www.mparticle.com/integrations
In theory, yes, it's possible to send data to Graphite using a combination of SQS + Lambda. I've tested this by writing some metric data to SQS and using a Node.js Lambda function to read and forward that data to our carbon endpoint at https://hostedgraphite.com via UDP, per our language guide.
Having said that, there are some further considerations to make sure this works, the main one being data format. Graphite/Carbon require data in a specific format, which mParticle might not support directly. As such, you will need an AWS Lambda function that formats the messages and then forwards them to Graphite (or, optionally, to another SQS queue from which a second Lambda reads and forwards the data to Graphite).
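To make the data-format point concrete: Graphite's plaintext protocol is just one line per metric, "metric.path value unix_timestamp". The answer above used a Node.js Lambda; a rough JVM-side sketch of the same format-and-forward step looks like the following, where the host, port and metric name are placeholders for your own carbon endpoint.

import java.net.{DatagramPacket, DatagramSocket, InetAddress}

// Sketch: format a metric as a Graphite plaintext line and push it over UDP.
object GraphiteUdpSender {
  private val carbonHost = "carbon.example.com" // placeholder carbon endpoint
  private val carbonPort = 2003                 // default carbon plaintext port

  def send(metricPath: String, value: Double): Unit = {
    val line  = s"$metricPath $value ${System.currentTimeMillis() / 1000}\n"
    val bytes = line.getBytes("UTF-8")
    val socket = new DatagramSocket()
    try {
      socket.send(new DatagramPacket(bytes, bytes.length,
        InetAddress.getByName(carbonHost), carbonPort))
    } finally socket.close()
  }
}

// A Lambda handler triggered by SQS would parse each record and call, for example:
// GraphiteUdpSender.send("mobile.app.errors.count", 1)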

Alpakka KinesisSink: cannot push messages to stream

I am trying to use the Alpakka Kinesis connector to send messages to a Kinesis stream, but I have had no success with it. I tried the code below, but nothing arrives in my stream.
implicit val sys = ActorSystem()
implicit val mat = ActorMaterializer()
implicit val kinesisAsync: AmazonKinesisAsync = AmazonKinesisAsyncClientBuilder.defaultClient()
val debug = Flow[PutRecordsRequestEntry].map { reqEntry =>
  println(reqEntry)
  reqEntry
}
val entry = new PutRecordsRequestEntry()
  .withData(ByteBuffer.wrap("Hello World".getBytes))
  .withPartitionKey(Random.nextInt.toString)
Source.tick(1.second, 1.second, entry).to(KinesisSink("myStreamName", KinesisFlowSettings.defaultInstance)).run()
// 2) Source.tick(1.second, 1.second, entry).via(debug).to(KinesisSink("myStreamName", KinesisFlowSettings.defaultInstance)).run()
Using a Sink.foreach(println) instead of KinesisSink prints out the PutRecordsRequestEntry every second => EXPECTED
Using KinesisSink, the entry is generated only once.
What am I doing wrong?
I am checking my stream with a KinesisSource, and reading works (tested with another stream).
Also, the AWS Kinesis monitoring dashboard doesn't show any PUT requests.
Note 1: I tried to enable Alpakka's debug logging, but with no effect:
<logger name="akka.stream.alpakka.kinesis" level="DEBUG"/>
in my logback.xml, plus debug on the root level.
Some troubleshooting steps to consider are below; I hope they help.
I suspect you're likely missing credentials and/or region configuration for your Kinesis client.
Kinesis Firehose
The Kinesis Producer Library (which Alpakka seems to be using) does not work with Kinesis Firehose. If you're trying to write to Firehose, this isn't going to work.
Application Logging
You'll probably want to enable logging for the Kinesis Producer Library, not just in Alpakka itself. Relevant documentation is available here:
Configuring the Kinesis Producer Library
Configuration Defaults for Kinesis Producer Library
AWS Side Logging
AWS CloudTrail is automatically enabled out of the box for Kinesis streams, and by default AWS will keep 90 days of CloudTrail logs for you.
https://docs.aws.amazon.com/streams/latest/dev/logging-using-cloudtrail.html
You can use the CloudTrail logs to see the API calls your application is making to Kinesis on your behalf. There's usually a modest delay in requests showing up - but this will let you know if the request is failing due to insufficient IAM permissions or some other issue with your AWS resource configuration.
Check SDK Authentication
The Kinesis client will be using the DefaultAWSCredentialsProviderChain credentials provider to make requests to AWS.
You'll need to make sure you are providing valid AWS credentials with IAM rights to make those requests to Kinesis. If your code is running on AWS, the preferred way of giving your application credentials is using IAM Roles (specified at instance launch time).
You'll also need to specify the AWS region when building the client in your code. Use your application.properties to configure this, or, if your application is part of a CloudFormation stack that lives in a single region, use the instance metadata service to retrieve the current region when your code is running on AWS.
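As a concrete illustration of the last two points, you can build the client with an explicit region and credentials provider rather than relying on defaultClient() to pick everything up (the region below is only an example; use the one your stream actually lives in):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
import com.amazonaws.regions.Regions
import com.amazonaws.services.kinesis.{AmazonKinesisAsync, AmazonKinesisAsyncClientBuilder}

implicit val kinesisAsync: AmazonKinesisAsync =
  AmazonKinesisAsyncClientBuilder
    .standard()
    .withRegion(Regions.EU_WEST_1) // must match your stream's region
    .withCredentials(new DefaultAWSCredentialsProviderChain())
    .build()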
The problem was an access-denied / missing permission for the action on the stream.
I had to add the Akka actor logging config
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  stdout-loglevel = "DEBUG"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
  logger-startup-timeout = "30s"
}
to see the debug lines, and I actually ran in debug mode and stepped through each stage.
It required the "PutRecords" permission in the IAM role.
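For anyone hitting the same thing, an illustrative minimal IAM policy statement for the role would look like this (region, account id and stream name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["kinesis:PutRecord", "kinesis:PutRecords"],
      "Resource": "arn:aws:kinesis:<region>:<account-id>:stream/myStreamName"
    }
  ]
}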

What is the best way to determine the connection state of an AWS IoT device?

How can I determine if a particular AWS IoT device is currently online? I could send an MQTT message and make the device answer it, but is there some implicit way of seeing whether a device is online/connected?
You can also use Fleet Indexing with Connectivity Indexing enabled (https://docs.aws.amazon.com/iot/latest/developerguide/managing-index.html) and search for your device ID; the results include connectivity status. You can also find all connected devices by searching with the query connectivity.connected:true.
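If you want to run that search from code rather than the console, a rough sketch with the AWS SDK for Java v1 looks like this; it assumes Fleet Indexing with Connectivity Indexing is already enabled and that the default AWS_Things index is used (the equivalent CLI call is aws iot search-index --query-string "connectivity.connected:true").

import com.amazonaws.services.iot.AWSIotClientBuilder
import com.amazonaws.services.iot.model.SearchIndexRequest
import scala.collection.JavaConverters._

val iot = AWSIotClientBuilder.defaultClient()

// Ask the fleet index for everything currently reported as connected.
val result = iot.searchIndex(
  new SearchIndexRequest()
    .withIndexName("AWS_Things")
    .withQueryString("connectivity.connected:true"))

// Each returned ThingDocument also carries the connectivity details for that device.
result.getThings.asScala.foreach(thing => println(thing.getThingName))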
Ok, there is a dedicated internal MQTT topic for it. Subscribe to $aws/events/presence/# to get presence events for all your devices.

User Destinations with Stomp, Spring Websockets, an External Broker with an External Consumer

My question centers around this slide from one of Rossen Stoyanchev's webinars.
When using a simpleBroker I can send messages to individual users with the /user/** destination format that is picked up in UserDestination and converted. I can also use it to send to a specific session, or all sessions of a specific user.
This is also possible when using an external broker like ActiveMQ or RabbitMQ, as long as the sender is also able to use /user/** or its helper annotations (@SendToUser etc.).
But if I am not processing these messages locally and I have another consumer connected to the external message broker (Apache Camel, for example), how do I handle user-specific messages and also reply at a user and session level?
If the other consumer is in the same JVM, you can have the "brokerMessagingTemplate" bean injected and use it to send messages to user-prefixed destinations.
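A rough sketch of that, assuming the injected "brokerMessagingTemplate" bean is Spring's SimpMessagingTemplate (which is what the WebSocket broker configuration registers) and using placeholder user and destination names:

import org.springframework.messaging.simp.SimpMessagingTemplate

// Sends to /user/{userName}/queue/replies; Spring's user destination handling
// translates this into the user's session-specific queue before it reaches the broker.
class UserNotifier(brokerMessagingTemplate: SimpMessagingTemplate) {
  def notifyUser(userName: String, payload: AnyRef): Unit =
    brokerMessagingTemplate.convertAndSendToUser(userName, "/queue/replies", payload)
}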
For 4.2 we plan to support user destinations in a deployment with multiple web application servers connected to an external broker (see https://jira.spring.io/browse/SPR-11620). So if the other consumer is in a different JVM, you could declare the @EnableWebSocketMessageBroker setup in that JVM as well, or you could simply extend AbstractMessageBrokerConfiguration if you don't need the WebSocket client bits.
HTH
