Alpakka KinesisSink: cannot push messages to stream - akka-stream

I am trying to use the Alpakka Kinesis connector to send messages to a Kinesis stream, but with no success. I tried the code below, but nothing shows up in my stream.
import java.nio.ByteBuffer
import scala.concurrent.duration._
import scala.util.Random
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Source}
import akka.stream.alpakka.kinesis.KinesisFlowSettings
import akka.stream.alpakka.kinesis.scaladsl.KinesisSink
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry
import com.amazonaws.services.kinesis.{AmazonKinesisAsync, AmazonKinesisAsyncClientBuilder}

implicit val sys = ActorSystem()
implicit val mat = ActorMaterializer()
implicit val kinesisAsync: AmazonKinesisAsync = AmazonKinesisAsyncClientBuilder.defaultClient()

val debug = Flow[PutRecordsRequestEntry].map { reqEntry =>
  println(reqEntry)
  reqEntry
}

val entry = new PutRecordsRequestEntry()
  .withData(ByteBuffer.wrap("Hello World".getBytes))
  .withPartitionKey(Random.nextInt.toString)

Source.tick(1.second, 1.second, entry).to(KinesisSink("myStreamName", KinesisFlowSettings.defaultInstance)).run()
// 2) Source.tick(1.second, 1.second, entry).via(debug).to(KinesisSink("myStreamName", KinesisFlowSettings.defaultInstance)).run()
Using a Sink.foreach(println) instead of KinesisSink prints the PutRecordsRequestEntry every second => EXPECTED
Using KinesisSink, the entry is generated only once.
What am I doing wrong?
I am checking my stream with a KinesisSource, and reading works (tested with another stream).
Also, the AWS Kinesis monitoring dashboard doesn't show any PUT requests.
Note 1: I tried to enable the Alpakka debug log, but with no effect:
<logger name="akka.stream.alpakka.kinesis" level="DEBUG"/>
in my logback.xml, plus debug at the root level.

Some troubleshooting steps to consider below - I hope they help.
I suspect you're likely missing credentials and/or region configuration for your Kinesis client.
Kinesis Firehose
The Kinesis Producer Library (which Alpakka seems to be using) does not work with Kinesis Firehose. If you're trying to write to a Firehose delivery stream, this isn't going to work.
Application Logging
You'll probably want to enable logging for the Kinesis Producer Library, not just in Alpakka itself. Relevant documentation is available here:
Configuring the Kinesis Producer Library
Configuration Defaults for Kinesis Producer Library
AWS Side Logging
AWS CloudTrail is enabled out of the box for Kinesis streams, and by default AWS keeps 90 days of CloudTrail event history for you.
https://docs.aws.amazon.com/streams/latest/dev/logging-using-cloudtrail.html
You can use the CloudTrail logs to see the API calls your application makes to Kinesis on your behalf. There's usually a modest delay before requests show up, but this will let you know whether a request is failing due to insufficient IAM permissions or some other issue with your AWS resource configuration.
Check SDK Authentication
The Kinesis client will be using the DefaultAWSCredentialsProviderChain credentials provider to make requests to AWS.
You'll need to make sure you are providing valid AWS credentials with IAM rights to make those requests to Kinesis. If your code is running on AWS, the preferred way of giving your application credentials is using IAM Roles (specified at instance launch time).
You'll also need to specify the AWS region when building the client in your code. You can configure this via your application.properties or, if your application is part of a CloudFormation stack that lives in a single region, by using the instance metadata service to retrieve the current region when your code runs on AWS.
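For example, a minimal sketch (the region below is just a placeholder) of building the client explicitly instead of relying on defaultClient():
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
import com.amazonaws.regions.Regions
import com.amazonaws.services.kinesis.{AmazonKinesisAsync, AmazonKinesisAsyncClientBuilder}

// Build the client with an explicit region and the default credentials chain,
// so there is no ambiguity about where credentials and region come from.
implicit val kinesisAsync: AmazonKinesisAsync = AmazonKinesisAsyncClientBuilder
  .standard()
  .withRegion(Regions.EU_WEST_1)                             // placeholder: use the region your stream lives in
  .withCredentials(new DefaultAWSCredentialsProviderChain()) // same chain the SDK uses by default
  .build()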

The problem was an access-denied error: a missing permission for the action on the stream.
I had to add the following Akka actor logging configuration
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  stdout-loglevel = "DEBUG"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
  logger-startup-timeout = "30s"
}
to see the debug lines, and I actually ran the application in debug and stepped through each stage.
It required the "PutRecords" permission in the IAM role.
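As a quick check (a sketch reusing the kinesisAsync client and entry from the question), you can call PutRecords directly with the plain SDK, outside the stream, so a missing permission fails immediately with a visible exception:
import com.amazonaws.services.kinesis.model.PutRecordsRequest

// One-off PutRecords call: an authorization problem surfaces here as an SDK
// exception right away, instead of being retried silently inside the stream.
val result = kinesisAsync.putRecords(
  new PutRecordsRequest()
    .withStreamName("myStreamName")
    .withRecords(entry)
)
println(result)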

Related

How do commands in FIWARE work correctly? Can we use the IoT Agent instead of the Orion Context Broker as a user?

We are following the theory that as users we must issue a command to the Context Broker to change the state of a device: Image 1
In our case, this command already works if we issue it from the IoT Agent; however, if we execute it against the Context Broker through a PATCH, it does not reach the IoT Agent.
Do you know why this could be happening?
Our Context Broker request is the following: Image 2
And finally, the request we make from the IoT Agent, which is the one that works, is this: Image 3
Another doubt that arises is: if the IoT Agent updates all the information in the Context Broker, why not execute the request against the IoT Agent directly instead of against the Context Broker?
Your request to the Context Broker seems to be OK. Sometimes the lack of ?type in the request causes problems (see for instance this post), but that doesn't seem to be your case.
I'd suggest checking the registrations at Orion. Registrations are the mechanism on which request forwarding from Orion to the IoT Agent is based (more info in the Orion documentation). The IoT Agent should create and manage them, but something could be failing. You can get the existing registrations in Orion with the GET /v2/registrations operation.
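Purely as an illustration (a sketch assuming a local Orion on the default port 1026 and no multi-tenant headers), the operation can be called and its output inspected like this:
import scala.io.Source

// GET /v2/registrations on a local Orion instance; the response is a JSON array
// of registrations whose provider URLs should point at the IoT Agent.
val registrations = Source.fromURL("http://localhost:1026/v2/registrations").mkString
println(registrations)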
With regards:
Another doubt that arises is, if the IoT Agent updates all the information in the Context Broker, why not execute the request from there instead of from the Contex Broker?
The FIWARE data management model is context-centric. Thus, the Context Broker is the central piece of the architecture, intermediating between context producers and context consumers. Commands are a kind of "context production", so it makes sense that the Context Broker deals with commands. Note that the client issuing the command may not even be able to access the IoT Agent directly (IoT Agents tend to be "close" to the physical devices they manage and are not typically open to direct client requests).

Snowflake: error creating a notification integration on an Azure storage queue

I was trying to create a notification integration for Azure storage and created the storage queue. The Snowflake subnet is included and the Snowflake service principal has access to the storage account; everything works fine with the storage integration. Now I am trying to set up the notification integration and get the following error:
SQL execution internal error: Processing aborted due to error 370001:1831050371;
create notification integration my_azure_int
  enabled = true
  type = queue
  notification_provider = azure_storage_queue
  azure_storage_queue_primary_uri = 'https://accountname.queue.core.windows.net/queuename'
  azure_tenant_id = '123456-abcdef-abc-123-98765432';
The error is not at all descriptive. Please suggest some ideas.
Can you verify how many notification integrations you have created in your account by executing the below command?
show notification integrations;
This could be because you exceeded the maximum number of integrations/queues that can be created (10 in total).
If that's not the case, I'd suggest trying again later or opening a support ticket.
It was an issue with Snowflake: for some reason the notification integration was not allowed, but as you can see, the error was created with incident 370001. Snowflake monitors those incidents and makes changes as needed.
They enabled the notification integration after a day, and then it worked fine.

Cloud Tasks client ignores retry configuration

Basically what the title says. The API and client docs state that a retry can be passed to create_task:
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
But this simply doesn't work. Passing a Retry instance does nothing and the queue-level settings are still used. For example:
from google.api_core.retry import Retry
from google.cloud.tasks_v2 import CloudTasksClient

client = CloudTasksClient()
# queue path and task payload are placeholders
parent = client.queue_path('my-project', 'us-central1', 'my-queue')
task = {'app_engine_http_request': {'relative_uri': '/foo'}}
retry = Retry(predicate=lambda _: False)  # a predicate that never allows a retry
client.create_task(parent, task, retry=retry)
This should create a task that is not retried. I've tried all sorts of different configurations, and every time it just uses whatever settings are set on the queue.
The retry parameter is for retrying the API request itself: you can pass a custom predicate to retry on different exceptions. There is no formal indication that this parameter prevents the created task from being retried. You may check the Retry page for details.
Google Cloud Support has confirmed that task-level retries are not currently supported. The documentation for this client library is incorrect. A feature request exists here https://issuetracker.google.com/issues/141314105.
Task-level retry parameters are available in the Google App Engine bundled service for task queuing, Task Queues. If your app is on GAE, which I'm guessing it is since your question is tagged with google-app-engine, you could switch from Cloud Tasks to GAE Task Queues.
Of course, if your app relies on something that is exclusive to Cloud Tasks like the beta HTTP endpoints, the bundled service won't work (see the list of new features, and don't worry about the "List Queues command" since you can always see that in the configuration you would use in the bundled service). Barring that, here are some things to consider before switching to Task Queues.
Considerations
Supplier preference - Google seems to be preferring Cloud Tasks. From the push queues migration guide intro: "Cloud Tasks is now the preferred way of working with App Engine push queues"
Lock in - even if your app is on GAE, moving your queue solution to the GAE bundled one increases your "lock in" to GAE hosting (i.e. it makes it even harder for you to leave GAE if you ever want to change where you run your app, because you'll lose your task queue solution and have to deal with that in addition to dealing with new hosting)
Queues by retry - the GAE Task Queues to Cloud Tasks migration guide section Retrying failed tasks suggests creating a dedicated queue for each set of retry parameters, and then enqueuing tasks accordingly. This might be a suitable way to continue using Cloud Tasks

Java Google AppEngine Managed VMs: What logs are obtainable through the Logging API?

I like that I can use the Logs API (described here: https://cloud.google.com/appengine/docs/java/logs/) to programmatically access and display app & request logs as I see fit; it's great.
Now that I'm using Managed VMs on App Engine, you can see in the Admin Console Logs Viewer that there are a ton of additional logs, including, in my case, a custom log which I found I could include in the viewer (described here: https://cloud.google.com/appengine/docs/managed-vms/custom-runtimes#logging).
My question is: Is there any way I can use the Logs API (or other pipelines already built?) to access these logs? My Managed VM module includes several components which could produce logs that I want to view:
App logs -- I can get these! No problem here.
Custom log files created by background processes I kick off in _ah/start (like "my_custom_1.log" in the screenshot)
STDERR & STDOUT from my background processes
Relevant Managed VM logs (e.g. for when an instance was restarted due to bad health... other system events like normal restarts?)
Basically I want "the total picture" at the instance level. Anyone tried to tame Managed VMs in this way with success? I'm not looking forward to rolling my own solution. And I wouldn't even know where to start on the problem of capturing STDERR and STDOUT. Any help appreciated.
There is a difference between App Engine logging and Google Cloud Logging. Some of the Managed VM logs go to both, but much of it goes only to Cloud Logging.
Until recently there was not an API to read Cloud logs, only to write them. However, there is a new v2 beta API: https://cloud.google.com/logging/docs/api/introduction_v2
To do things at the instance level, entries in Cloud Logging should have metadata set to denote which VM they came from. Both of these values seem to vary across logs from my VMs:
compute.googleapis.com/resource_name
compute.googleapis.com/resource_id

Silverlight logging of errors

What is the correct way of handling errors on the client side of Silverlight applications? I tried building a service endpoint that would receive details about the error and then would write that string to the database. The problem is, the error's text exceeds the maximum byte length, so I can't send the exception message and stacktrace. What would be a better way of handling errors that end up at the client side?
Try handling faults...I used this pattern from MSDN
http://msdn.microsoft.com/en-us/library/dd470096%28VS.96%29.aspx
If you find your message is too long to send to your logging web service, then try setting binding properties such as maxBufferSize and maxStringContentLength to appropriately large values. They default to 16KB; personally I have set mine to 2147483647 (which is int.MaxValue).
Obviously you cannot send the raw exception straight to the logging web service (exceptions are not serializable). What I did was write a function that takes an exception and walks it, translating it into a WCF-friendly structure that can then be passed to my logging endpoint. Of course, you need to ensure that if this fails you have a backup plan, such as logging to isolated storage if you are running in the browser, or to the user's file system if you are running elevated out-of-browser.
You should not be considering logging error messages via a service. What if the error that you want to log is related to the service itself? Maybe the server that hosts all dependent services (including the error-logging service) is not reachable or is down. Client errors should be logged on the client side and periodically flushed to the server when connectivity to the service is available.
That's what I would do.
Take a look at the new Silverlight Integration Pack for Enterprise Library from Microsoft patterns & practices. It provides plumbing for both logging (client-side and via a remote service) and exception handling with flexible configuration of policies via config or programmatically.
