I have Snowflake tasks that run every 30 minutes. Currently, when a task fails due to an underlying data issue in the stored procedure that the task calls, there is no way to notify users of the failure.
SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY());
How can notifications be set up for a Snowflake task failure? The design plan I have in mind is to build a Python application that runs every 30 minutes and looks for any errors in the TASK_HISTORY output. Please advise if there are better approaches to handle failure notifications.
I think a Python script would currently be the best way to address this.
You can use this SQL to query the last runs, read the results into a data frame, and filter out the errors:
select *
from table(information_schema.task_history(scheduled_time_range_start=>dateadd(minutes, -30,current_timestamp())))
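If you only want the failures, you can also filter them in the same query before loading anything into a data frame, with something like the following (STATE and the error columns are part of the TASK_HISTORY output):
select name, scheduled_time, state, error_code, error_message
from table(information_schema.task_history(scheduled_time_range_start=>dateadd(minutes, -30, current_timestamp())))
where state = 'FAILED'
order by scheduled_time desc;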
It is possible to create a notification integration and send a message when an error occurs. As of May 2022 this feature is in preview and is supported for accounts on Amazon Web Services.
Enabling Error Notifications for Tasks
This topic provides instructions for configuring error notification support for tasks using cloud messaging. This feature triggers a notification describing the errors encountered when a task executes SQL code.
Currently, error notifications rely on cloud messaging provided by Amazon Simple Notification Service (SNS); support for Google Cloud Pub/Sub queues and Microsoft Azure Event Grid is planned.
New tasks:
Create a new task using CREATE TASK. For descriptions of all available task parameters, see the SQL command topic:
CREATE TASK <name>
[...]
ERROR_INTEGRATION = <integration_name>
AS <sql>
Existing tasks:
ALTER TASK <name> SET ERROR_INTEGRATION = <integration_name>;
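For completeness, a minimal sketch of the AWS setup; the topic ARN, role ARN, and object names below are placeholders rather than values from your account, and the SNS topic and IAM role are assumed to exist already:
CREATE NOTIFICATION INTEGRATION my_task_error_int
  ENABLED = TRUE
  DIRECTION = OUTBOUND
  TYPE = QUEUE
  NOTIFICATION_PROVIDER = AWS_SNS
  AWS_SNS_TOPIC_ARN = 'arn:aws:sns:us-east-1:111122223333:snowflake-task-errors'
  AWS_SNS_ROLE_ARN = 'arn:aws:iam::111122223333:role/snowflake-task-errors';

CREATE TASK my_task
  WAREHOUSE = my_wh
  SCHEDULE = '30 MINUTE'
  ERROR_INTEGRATION = my_task_error_int
AS
  CALL my_failing_procedure();
Any subscriber on the SNS topic (email, a Lambda function, a Slack webhook, etc.) then receives a message describing the failed task run.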
A new Snowflake feature was announced for task error notifications on AWS via SNS. This doc walks through how to set this up for task failures:
https://docs.snowflake.com/en/user-guide/tasks-errors.html
Related
I have a multi-tenancy application developed with Spring Boot. In the master table I hold information about the tenant databases. I am trying to create a service that receives a row from the master table and creates the database for it. Is there a known way to do this in Spring Boot? The only information I can find on the internet is about creating them at application startup, and this is not desired. The master-tenant table is in a master schema and has the following structure:
The method from the service is as follows:
public void createTenant(TenantDTO tenantDTO) {
    tenantepository.save(new Tenant(tenantDTO));

    // register the new tenant database in the master table
    MasterTenant masterTenant = new MasterTenant();
    masterTenant.setDbName(tenantDTO.getTenantId());
    masterTenant.setDriverClass("com.mysql.jdbc.Driver");
    masterTenant.setTenantId(tenantDTO.getTenantId());
    masterTenant.setPassword("password");
    masterTenant.setStatus(EStatus.ACTIVE.name());
    masterTenant.setUserName("root");
    masterTenant.setUrl("jdbc:mysql://localhost:3306/" + tenantDTO.getTenantId());

    // switch the current context to the new tenant and persist the master record
    DBContextHolder.setCurrentDb(masterTenant.getDbName());
    masterTenantRepository.save(masterTenant);
    multiTenantConnectionProvider.selectDataSource(masterTenant.getTenantId());

    // create schema + create tables from existing entities or run a script of sql
}
I need help figuring out the part that creates the schema and the tables.
One option I can think of is having a back end like Jenkins or AWS Lambda to which you can send the request to create the database in your server pool.
If you choose a Jenkins job, Jenkins exposes a REST API that you can use to trigger the job. Since this is more tied to the DevOps side, the Jenkins job can run a Terraform-like script that accepts the tenant id as a parameter and uses it as the database name, etc.; a rough sketch of such a script follows the list below.
Some advantages of using a Jenkins job are:
The job can be long-running
Supports queueing
Can easily integrate with DevOps and keeps those dependencies out of the application
API-based access to trigger a job, view status and logs, etc.
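To make the missing piece concrete, the script that the Jenkins job (or the service itself, via a plain JDBC statement) runs could look roughly like this; the tenant database name and the customer table are made-up examples, and the real DDL has to mirror your JPA entities or be delegated to a migration tool such as Flyway or Liquibase:
CREATE DATABASE IF NOT EXISTS tenant_acme;  -- tenant_acme comes from tenantDTO.getTenantId()
USE tenant_acme;

CREATE TABLE IF NOT EXISTS customer (
    id   BIGINT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);
-- ...one CREATE TABLE per entity, or run your versioned migration scripts here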
I am receiving the below error while trying to connect to Snowflake via Jitterbit Cloud Studio:
Error
Error Code: snowflake07
Stacktrace:
Error executing get activity. ,Stack Trace: org.jitterbit.connector.sdk.exceptions.ActivityExecutionException: Error executing get activity.
at org.jitterbit.connector.snowflake.activities.GetActivity.execute(GetActivity.java:94)
...
at java.lang.Thread.run(Thread.java:748)
Caused by: net.snowflake.client.jdbc.SnowflakeSQLException: No active warehouse selected in the current session. Select an active warehouse with the 'use warehouse' command.
The integration is configured in Jitterbit, but I am not sure what setting I need to update in Snowflake to make a GET call.
Or is there a way to use the "USE WAREHOUSE" command in Jitterbit before connecting to Snowflake?
Snowflake requires "compute" resources to run queries, and these compute resources are called warehouses. Most client tools let you set login/configuration parameters, and this is where you would set the warehouse used for compute.
If Jitterbit does not allow this (though I think it uses JDBC, so it should), you can simply set a default compute warehouse for the user who is logging in. To do so, issue an ALTER USER command, such as the following:
ALTER USER your_user_id_here SET DEFAULT_WAREHOUSE = your_warehouse_name;
https://docs.snowflake.net/manuals/sql-reference/sql/alter-user.html
Setting the default warehouse for the user will most likely allow you to get past your initial connection issue.
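To verify that the default is in place (the user name below is a placeholder), you can check it from a Snowflake worksheet:
DESCRIBE USER your_user_id_here;   -- DEFAULT_WAREHOUSE should now show your warehouse
SELECT CURRENT_WAREHOUSE();        -- after reconnecting, this should no longer be NULL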
One of my tables holds data for business transactions, and I have to run a job when there have been no transactions for a 5-minute interval. I am trying to achieve this using Timer() in Java. To get notified when a transaction is executed I need some kind of trigger (I do not have access to the code, as it is a 3rd-party tool), so I am using Database Change Notification.
However, while running this I very often get the error below. I am using Java 1.6 and ojdbc6.jar for the connection, and the application is running on WebLogic with an Oracle 11g database.
Exception in thread "Thread-4" java.lang.IndexOutOfBoundsException
    at java.nio.Buffer.checkIndex(Buffer.java:540)
    at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:139)
    at oracle.jdbc.driver.NTFConnection.unmarshalOneNSPacket(NTFConnection.java:334)
    at oracle.jdbc.driver.NTFConnection.run(NTFConnection.java:182)
Please adapt the example at http://appcrawler.com/wordpress/2012/08/28/jdbc-and-oracle-database-change-notification/ for your listener and check whether the issue still exists. My understanding is that the issue is not related to the Oracle DB, but to the Java implementation in your code. Please add the java tag to your question as well.
I am using SQL Server Service Broker on SQL Server 2008 for scaleout with SignalR v2.1.2. It was recently discovered that we are producing 50k+ errors per day in our DB logs. After some research, I found 3 orphaned Service Broker queues from December. Error example:
2016-02-27 23:58:01.79 spid30s The activated proc '[dbo].[SqlQueryNotificationStoredProcedure-2ffbddba-6ddc-4ad0-88b4-45a405e975e0]' running on queue 'MY_SIGNALR_DB.dbo.SqlQueryNotificationService-2ffbddba-6ddc-4ad0-88b4-45a405e975e0' output the following: 'Could not find stored procedure 'dbo.SqlQueryNotificationStoredProcedure-2ffbddba-6ddc-4ad0-88b4-45a405e975e0'.'
These queues were created in December and were NOT dropped for some reason. The corresponding SPs were apparently dropped as expected. The DB will produce an error every 5 seconds for this (equates to 50k per day with 3 queues). Each queue DOES contain a message.
Questions:
What can cause this?
Are there additional SignalR settings that can be implemented to ensure these are cleaned up?
Is this a bug in SQL Server Service Broker?
Is there a document which describes SignalR's expected behavior with regards to Queues and their expiration?
Thank you for your time.
These are leftovers from SqlDependency. The implementation of SqlDependency.Start() creates a just-in-time service, queue, and activated procedure (see the reference source). This has some issues, and even a simple Visual Studio debugging session can leave stranded queues/activated procedures.
You can clean up these left-over services/queues/procedures as they happen, or you can choose to use the lower-level SqlNotificationRequest class and handle the service/queue deployment on your own. Pick your poison.
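If you just want to get rid of the noise, a sketch along these lines generates DROP statements for the stranded objects (run it in MY_SIGNALR_DB and review the output before executing it; a service must be dropped before the queue it references, and the object name patterns are taken from the error message above):
SELECT 'DROP SERVICE [' + name + '];'
FROM   sys.services
WHERE  name LIKE 'SqlQueryNotificationService-%'
UNION ALL
SELECT 'DROP QUEUE [dbo].[' + name + '];'
FROM   sys.service_queues
WHERE  name LIKE 'SqlQueryNotificationService-%'
UNION ALL
SELECT 'DROP PROCEDURE [dbo].[' + name + '];'
FROM   sys.procedures
WHERE  name LIKE 'SqlQueryNotificationStoredProcedure-%';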
I created a queue thus:
CREATE QUEUE log_line_queue
WITH RETENTION = ON, --can decrease performance
STATUS = ON,
ACTIVATION (
MAX_QUEUE_READERS = 1, --number of concurrent instances of sp_insert_log_line
PROCEDURE_NAME = sp_insert_log_line,
EXECUTE AS OWNER
);
What can I do quickly in SSMS to add an item to my queue using T-SQL?
In SSMS, select the required database in Object Explorer. Then find the Service Broker node of this database, right-click it, and select the 'New Service Broker Application...' command. This will create a template for you to start using Service Broker quickly. You'll also see the minimal recommended configuration needed to implement and run your own application.
As for using one queue: if this is your first experience with Service Broker, why not follow common practice at the beginning? After running several samples and/or prototypes of your own, you can decide how many queues to use, and you will know how to do it.
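If you would rather wire up just enough to push a test message into log_line_queue by hand, a minimal sketch looks like this; the message type, contract, and initiator objects are made-up names, so adjust them to whatever the template generates for you:
CREATE MESSAGE TYPE log_line_message VALIDATION = NONE;
CREATE CONTRACT log_line_contract (log_line_message SENT BY INITIATOR);
CREATE QUEUE log_line_initiator_queue;
CREATE SERVICE log_line_initiator_service ON QUEUE log_line_initiator_queue (log_line_contract);
CREATE SERVICE log_line_target_service ON QUEUE log_line_queue (log_line_contract);

DECLARE @handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE log_line_initiator_service
    TO SERVICE 'log_line_target_service'
    ON CONTRACT log_line_contract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @handle MESSAGE TYPE log_line_message ('hello from SSMS');
Sending the message fires the activation procedure (sp_insert_log_line) on log_line_queue, so you can verify the whole path end to end.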