Snowflake creation of Notification integration on azure storage queue error - snowflake-cloud-data-platform

I was trying to create a notification integration for Azure storage. I created the storage queue, the Snowflake subnet is included, and the Snowflake service principal has access to the storage account; everything works fine with the storage integration. Now I am trying to set up the notification integration and I get the following error:
SQL execution internal error: Processing aborted due to error 370001:1831050371;
create notification integration my_azure_int
enabled = true
type = queue
notification_provider = azure_storage_queue
azure_storage_queue_primary_uri = 'https://accountname.queue.core.windows.net/queuename'
azure_tenant_id = '123456-abcdef-abc-123-98765432';
The error is not at all descriptive. Please suggest some ideas.

Can you verify how many notification integrations you have created in your account by executing the below command?
show notification integrations;
This could be because you exceeded the maximum number of integrations/queues that can be created (10 in total).
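To confirm the count, you can run something like the following right after the SHOW command (old_unused_int below is a hypothetical name for an integration you no longer need):
show notification integrations;
-- count the rows returned by the SHOW command above
select count(*) from table(result_scan(last_query_id()));
-- if you are at the 10-integration limit, dropping an unused integration frees a slot
drop integration if exists old_unused_int;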
If that's not the case, I'd suggest trying again later or opening a support ticket.

It was an issue on the Snowflake side; for some reason notification integrations were not allowed on the account. As you can see, the error was raised with an internal incident (error 370001). Snowflake monitors those incidents and makes changes as needed.
They enabled notification integrations after a day, and then it worked fine.

Related

Trigger execution warnings with Logic App Standard on Azure Function Runtime

I've got a Logic App (Standard) running on an Azure Function runtime, and I've noticed I'm getting spammed with warnings for my O365 When a new email arrives in a shared mailbox (V2) trigger.
Trigger is meant to execute on cluster type 'Classic'. However, it is executing on cluster 'NotSpecified'
Just created a Logic App (Standard) and added an O365 "When a new email arrives in a shared mailbox (V2)" trigger.
Allowed the trigger to fire
Log Stream/AppInsights will show the warning about trigger execution: Trigger is meant to execute on cluster type 'Classic'. However, it is executing on cluster 'NotSpecified'
There can be several reasons for the error mentioned above. One way to solve this problem is by reconfiguring the O365 Logic App connection and trying again.
Usually it happens when there is a version mismatch in a library reference or an older version of a component is referenced in the logic app. If the error still persists, implement proper error handling in the logic app to get a detailed error; Application Insights will give a proper error log. Please provide the error logs if the issue still persists.
Please check this Handle errors and exceptions in Azure Logic Apps documentation from Microsoft for more information.

"An internal error occurred while ensuring the default service account exists" while creating Google App Engine

I am trying to create a Google App Engine application using the create application option, but I am getting the error below:
An internal error occurred while ensuring the default service account exists.
Can you please help me with a solution?
I tried creating it in a different location and got the same error.
Make sure the default App Engine Service Account is not missing (due to an accidental deletion, e.g.) from the IAM & admin > Service accounts section of the Cloud Console. It is named after your project followed by "@appspot.gserviceaccount.com".
If you do not see it, you can recover it by doing for example a REST API call as documented here:
POST https://iam.googleapis.com/v1/projects/[PROJECT-ID]/serviceAccounts/[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com:undelete
If the deletion was made more than 30 days ago, the only way left to fix the issue would be to create a new project and use its brand new default Service Account.

Azure logic app - Transform to XML - MapNotReady

I'm trying to translate an X12 EDI message using a map created in VS2015, but I get the following error:
MapNotReady. The map '' is still being processed. Please try again later.
Running the input in VS2015 I get the correct result, but not when using Azure Logic Apps.
Resolved this issue by creating a new integration account in a new resource group and a different location.
Looks like a bug in Azure; I will log a call with MS.
I faced the same issue after deploying a logic app using an ARM template.
What was I doing?
In the deployment PowerShell, I was creating the integration account and adding the schemas and maps.
Then I deployed the logic app using an ARM template.
Immediately after deployment, I tried to execute the logic app. At that point, I received the MapNotReady exception in the transform action.
However, when I retried the message 10 minutes later, the problem was gone. It looks like the map service was not fully deployed.
So there is no need to deploy to a different resource group. Just wait a few minutes before executing the logic app.

Alpakka KinesisSink : Can not push messages to Stream

I am trying to use the Alpakka Kinesis connector to send messages to a Kinesis stream, but I have no success with it. I tried the code below, but nothing shows up in my stream.
// Imports assumed for the pre-1.0 Alpakka Kinesis connector and the AWS SDK v1 client
import java.nio.ByteBuffer
import scala.concurrent.duration._
import scala.util.Random
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Source}
import akka.stream.alpakka.kinesis.KinesisFlowSettings
import akka.stream.alpakka.kinesis.scaladsl.KinesisSink
import com.amazonaws.services.kinesis.{AmazonKinesisAsync, AmazonKinesisAsyncClientBuilder}
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry
implicit val sys = ActorSystem()
implicit val mat = ActorMaterializer()
implicit val kinesisAsync: AmazonKinesisAsync = AmazonKinesisAsyncClientBuilder.defaultClient()
val debug = Flow[PutRecordsRequestEntry].map { reqEntry =>
  println(reqEntry)
  reqEntry
}
val entry = new PutRecordsRequestEntry()
  .withData(ByteBuffer.wrap("Hello World".getBytes))
  .withPartitionKey(Random.nextInt.toString)
Source.tick(1.second, 1.second, entry).to(KinesisSink("myStreamName", KinesisFlowSettings.defaultInstance)).run()
// 2) Source.tick(1.second, 1.second, entry).via(debug).to(KinesisSink("myStreamName", KinesisFlowSettings.defaultInstance)).run()
Using a Sink.foreach(println) instead of KinesisSink prints out the PutRecordsRequestEntry every 1 second => EXPECTED
Using KinesisSink, the entry is generated only once.
What am I doing wrong?
I am checking my stream with a KinesisSource, and reading works (tested with another stream).
Also, the monitoring dashboard of AWS Kinesis doesn't show any PUT requests.
Note 1: I tried to enable the debug log of Alpakka, but with no effect
<logger name="akka.stream.alpakka.kinesis" level="DEBUG"/>
in my logback.xml + debug on root level
Some troubleshooting steps to consider below - I hope they help.
I suspect you're likely missing credentials and/or region configuration for your Kinesis client.
Kinesis Firehose
The Kinesis Producer Library (what Alpakka seems to be using) does not work with Kinesis Firehose. If you're trying to write to Firehose this isn't going to work.
Application Logging
You'll probably want to enable logging for the Kinesis Producer Library, not just in Alpakka itself. Relevant documentation is available here:
Configuring the Kinesis Producer Library
Configuration Defaults for Kinesis Producer Library
AWS Side Logging
AWS CloudTrail is automatically enabled out of the box for Kinesis streams, and by default AWS will keep 90 days of CloudTrail logs for you.
https://docs.aws.amazon.com/streams/latest/dev/logging-using-cloudtrail.html
You can use the CloudTrail logs to see the API calls your application is making to Kinesis on your behalf. There's usually a modest delay in requests showing up - but this will let you know if the request is failing due to insufficient IAM permissions or some other issue with your AWS resource configuration.
Check SDK Authentication
The Kinesis client will be using the DefaultAWSCredentialsProviderChain credentials provider to make requests to AWS.
You'll need to make sure you are providing valid AWS credentials with IAM rights to make those requests to Kinesis. If your code is running on AWS, the preferred way of giving your application credentials is using IAM Roles (specified at instance launch time).
You'll also need to specify the AWS region when building the client in your code. Use your application.properties for configuring this, or, if your application is part of a CloudFormation stack that lives in a single region, use the instance metadata service to retrieve the current region when your code is running on AWS.
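As a sketch of that last point, using the AWS SDK v1 client from the question (eu-west-1 below is a placeholder region), the client can be built with an explicit region and credentials provider instead of defaultClient():
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
import com.amazonaws.regions.Regions
import com.amazonaws.services.kinesis.{AmazonKinesisAsync, AmazonKinesisAsyncClientBuilder}
// eu-west-1 is a placeholder; use the region your stream actually lives in
implicit val kinesisAsync: AmazonKinesisAsync = AmazonKinesisAsyncClientBuilder
  .standard()
  .withRegion(Regions.EU_WEST_1)
  .withCredentials(new DefaultAWSCredentialsProviderChain())
  .build()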
The problem was an access denied / missing permission for the action on the stream.
I had to add the following akka actor config for logging
akka {
loggers = ["akka.event.slf4j.Slf4jLogger"]
loglevel = "DEBUG"
stdout-loglevel = "DEBUG"
logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
logger-startup-timeout = "30s"
}
to see the debug lines, and I actually ran in debug mode and stepped into each stage.
It required the "PutRecords" permission in the IAM role
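For reference, a minimal IAM policy statement granting that permission might look like the sketch below (the region and account ID are placeholders; the stream name is the one from the question):
{
  "Effect": "Allow",
  "Action": ["kinesis:PutRecords"],
  "Resource": "arn:aws:kinesis:eu-west-1:123456789012:stream/myStreamName"
}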

Can I use StackDriver Trace PHP application in GKE?

I want to check the daily RPC latencies of each endpoint of a CakePHP application running in a GKE cluster. From reading the documents, I found it is possible using the PHP Google client or a Zipkin server, but I don't know how easy either would be to introduce into our app; both seem tough to me.
In addition, I noticed the GKE cluster configuration has a Stackdriver Trace option, though on our cluster it is set to disabled. Can we trace spans if we enable it?
Could you give some advice?
I succeeded in sending to GCP's Trace API from the PHP client via REST. I can see the trace set by the PHP client parameters, but my endpoint for the Trace API has stopped, though I don't know why. Maybe it is still not well supported, since the documentation has many ambiguous expressions. So I switched to watching server responses in BigQuery via fluentd and Data Studio, and that seems the best solution, because a span can be set automatically by table name with yyyymmdd, and we can watch arbitrary metrics with a custom query or a calculated field.
