Analytics are not displayed in real time in WSO2 Analytics

I have configured a distributed setup of API Manager 2.1.0 and configured Analytics 2.1.0 as well. It takes too long for the analytics to be displayed after an API invocation.
Server 1 (1 Publisher instance,1 Store instance, 1 Analytics instance, 1 Traffic Manager instance)
Server 2 (1 Key Manager Instance, 1 Gateway Instance)
Server 3 (1 Key Manager Instance, 1 Gateway Instance)
It seems the batch scripts run only once per day, even though the cron expression is set to "0 0/5 * 1/1 * ? *" in a few scripts such as APIM_STAT_SCRIPT, APIM_STAT_SCRIPT_THROTTLE, and APIM_LAST_ACCESS_TIME_SCRIPT.
But when I try to execute those scripts manually, I get the warning:
"Scheduled task for the script : APIM_LAST_ACCESS_TIME_SCRIPT is already running. Please try again after the scheduled task is completed.".
Still, the data is not populated in the summary tables until the next day.
I want these scripts to be executed every 15 minutes.
When I configured a single API Manager 2.1.0 instance with Analytics 2.1.0 on the same server, it worked as expected.
How can I resolve this?

Since the number of analytics records is increasing day by day, the batch scripts take longer and longer to execute. That is why it takes so long for the analytics to be displayed.
To improve performance, you can remove historical data using the Data Purging option in APIM Analytics.
I was able to resolve the above issue after purging the historical data.
For more information, refer to https://docs.wso2.com/display/AM210/Purging+Analytics+Data

Related

Snowflake task failure notification

I have Snowflake tasks that run every 30 minutes. Currently, when a task fails due to an underlying data issue in the stored procedure that the task calls, there is no way to notify users of the failure.
SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY());
How can notifications be set up for a Snowflake task failure? The design I have in mind is to build a Python application that runs every 30 minutes and looks for errors in the TASK_HISTORY output. Please advise if there are better approaches to handle failure notifications.
I think currently a Python script would be the best way to address this.
You can use this SQL to query the last runs, read the results into a data frame, and filter out the errors:
select *
from table(information_schema.task_history(scheduled_time_range_start=>dateadd(minutes, -30, current_timestamp())))
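For example, a minimal sketch that keeps only the failed runs (the column names come from the standard TASK_HISTORY output; the 30-minute window matches the task schedule in the question):
select name, state, error_code, error_message, scheduled_time
from table(information_schema.task_history(scheduled_time_range_start=>dateadd(minutes, -30, current_timestamp())))
where state = 'FAILED';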
It is possible to create a Notification Integration and send a message when an error occurs. As of May 2022 this feature is in preview, supported by accounts on Amazon Web Services.
Enabling Error Notifications for Tasks
This topic provides instructions for configuring error notification support for tasks using cloud messaging. This feature triggers a notification describing the errors encountered when a task executes SQL code.
Currently, error notifications rely on cloud messaging provided by the Amazon Simple Notification Service (SNS); support for Google Cloud Pub/Sub queues and Microsoft Azure Event Grid is planned.
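A minimal sketch of such an integration for SNS (the integration name and ARNs here are placeholders, not values from the original post):
CREATE NOTIFICATION INTEGRATION my_error_int
  ENABLED = TRUE
  DIRECTION = OUTBOUND
  TYPE = QUEUE
  NOTIFICATION_PROVIDER = AWS_SNS
  AWS_SNS_TOPIC_ARN = 'arn:aws:sns:us-east-2:111122223333:my_sns_topic'
  AWS_SNS_ROLE_ARN = 'arn:aws:iam::111122223333:role/my_sns_role';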
New Tasks
Create a new task using CREATE TASK. For descriptions of all available task parameters, see the SQL command topic:
CREATE TASK <name>
[...]
ERROR_INTEGRATION = <integration_name>
AS <sql>
Existing tasks:
ALTER TASK <name> SET ERROR_INTEGRATION = <integration_name>;
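Putting it together for the 30-minute schedule in the question, a hedged example (the task, warehouse, integration, and procedure names are all hypothetical):
CREATE TASK my_task
  WAREHOUSE = my_wh
  SCHEDULE = '30 MINUTE'
  ERROR_INTEGRATION = my_error_int
AS
  CALL my_stored_procedure();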
A new Snowflake feature was announced for Task Error Notifications on AWS via SNS. This doc walks through how to set this up for task failures:
https://docs.snowflake.com/en/user-guide/tasks-errors.html

Snowflake SSRS ODBC error: No active warehouse selected in the current session. Select an active warehouse with the 'use warehouse' command

I'm using SSRS (SQL Server Reporting Services) to display reports; my data source is Snowflake.
I have installed the Snowflake ODBC driver and configured it properly.
I have created a shared data source on the SSRS server (via Report Manager) and put in my own credentials, and the connection works fine.
I'm able to build the SSRS report without any issues. When I run the report, everything works fine. I can publish the report to the server, and the report renders perfectly fine in the browser.
The issue is when I go back to the report the next day; I'm presented with an error:
An error has occurred during report processing. (rsProcessingAborted)
Query execution failed for dataset
'insert_name_of_my_dataset_here'. (rsErrorExecutingCommand)
ERROR [57P03] No active warehouse selected in the current session.
Select an active warehouse with the 'use warehouse' command.
So this also means that the following don't work either:
Subscriptions
Cache refresh
Snapshots
The only thing that works is to open my report in SSRS Report Builder, right-click EACH of my datasets ("each" is very important; it doesn't work if I don't do all of them), and run the queries manually for each of them. Then the "connection" or "session" is "re-activated" and the report runs fine, both locally AND on the server. Note that I do not have to re-publish the report to the server for it to run.
Steps I have taken to address the issue (that didn't yield any resolution):
I have tried putting the "use warehouse WAREHOUSE_NAME;" command before each dataset's SQL script, but Snowflake's API doesn't allow multiple SQL commands to be sent in a single request. I saw that this functionality is in Snowflake's development pipeline and found this link: https://github.com/snowflakedb/snowflake-connector-net/issues/33 - this work was started in 2018, and the last update, from Apr 2019, says they are starting to address the JDBC driver; no mention of the ODBC driver yet.
I have set the Snowflake parameter CLIENT_SESSION_KEEP_ALIVE to true (https://docs.snowflake.com/en/sql-reference/parameters.html#client-session-keep-alive), but according to the community portal: "A similar 'keep alive' parameter is not currently available for the ODBC driver. Instead, you could issue a dummy query every few hours to keep the connection alive." (https://community.snowflake.com/s/article/faq-how-long-can-my-jdbcodbc-connection-remain-idle)
I have tried to create a cache refresh plan and a snapshot schedule that caches or snapshots the report every 3 hours; it works for the first scheduled run but fails with the same error for the subsequent ones.
The only thing I didn't try is to have Snowflake never close the connection and keep the warehouse in the "started" state indefinitely... but this would increase my cost, and I'm pretty sure it won't work, since the session would end anyway after 4 hours...
Any assistance is welcome!
Thanks
Specs:
SSRS 2014
Snowflake X-small
ODBC 64-bit driver, installed from the Snowflake driver repository (tested with 32-bit also, but 64-bit is the one that is visible to SSRS)
I faced the same kind of issue and fixed it by granting the corresponding role access to the warehouse.
On the warehouse, grant the role USAGE.
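For example (the warehouse and role names here are placeholders):
GRANT USAGE ON WAREHOUSE consumer_wh TO ROLE reporting_role;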
Could it be related to the warehouse name (in the ODBC settings)? Is there a typo: COSNUMER_WH or CONSUMER_WH?
I strongly recommend setting default "context" configurations for situations like this: set the default role, warehouse, database, and schema with commands such as this:
ALTER USER xyz SET DEFAULT_WAREHOUSE = 'WH_NAME_HERE' ;
https://docs.snowflake.com/en/sql-reference/sql/alter-user.html
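A fuller sketch covering role, warehouse, and namespace as well (the user and object names here are hypothetical):
ALTER USER ssrs_user SET DEFAULT_ROLE = 'REPORTING_ROLE';
ALTER USER ssrs_user SET DEFAULT_WAREHOUSE = 'CONSUMER_WH';
ALTER USER ssrs_user SET DEFAULT_NAMESPACE = 'REPORTS_DB.PUBLIC';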

AWS DataPipeline insert status with SQLActivity

I am looking for a way to record the status of the pipeline in a DB table. I assume this is a very common use case.
Is there any way I can record:
status and time of completion of the complete pipeline.
status and time of completion of selected individual activities.
the ID of individual runs/execution.
The only way I found was using SQLActivity, which depends on an individual activity, but even there I cannot access the status or timestamp of the parent node.
I am using a JDBC connection to connect to a remote SQL Server, and the pipeline is for copying S3 files into the SQL Server DB.
Hmmm... I haven't tried this, but I can hit you with some pointers that might achieve the desired results. However, you will have to do the research and figure out the actual implementation.
Option 1
Create a ShellCommandActivity whose dependsOn is set to the last activity in your pipeline. The shell script can use the AWS CLI (aws datapipeline list-runs) to fetch details of the current run; you can use filters to achieve this.
Use Staging Data to move the output of the previous ShellCommandActivity into a SQLActivity that eventually inserts it into the destination SQL Server (see the table sketch below).
Option 2
Use an AWS Lambda function to run aws datapipeline list-runs periodically, with filters, and update the destination table (sketched below) with the latest activities.
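In either option, the destination status table could look something like this hypothetical T-SQL sketch (the table name, column names, and types are all assumptions, mirroring the run ID, status, and completion time asked about above):
-- Hypothetical status table in the destination SQL Server database
CREATE TABLE pipeline_run_status (
    run_id        VARCHAR(64)  NOT NULL, -- run/execution ID reported by list-runs
    activity_name VARCHAR(128) NOT NULL, -- pipeline activity (or the pipeline itself)
    status        VARCHAR(32)  NOT NULL, -- e.g. FINISHED, FAILED, CANCELED
    completed_at  DATETIME     NULL      -- completion time reported by list-runs
);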

SqlIaaSExtension.Service broken on Azure SQL Server 2016 VM

Two months ago I deployed a new VM in Azure. I used the pre-configured "SQL Server 2016 SP1 Standard on Windows Server 2016" image with 7 GB of RAM, and I chose the offered option to make backups automatically. The only other things I changed were adding it to AD and putting some databases on it (the largest with a backup file of ~2 GB).
Now the server is running a service called SqlIaaSExtension.Service, which I understand is for doing these backups as well as automated patching. You can find the service's description here: MS service description
The problem is, it keeps building up memory until, after some weeks, the SQL Server itself fails to execute larger queries. A restart of the SqlIaaSExtension.Service fixes the problem, but this is not at all a sustainable solution.
Does anybody know a working solution, other than disabling the service and losing the functionality altogether?
I have meanwhile received some information from Microsoft:
There seems to be an error in the SqlIaaSExtension.Service, which is known to MS and will eventually be fixed.
The workarounds are:
A: If you don't need the functionality, remove the service, as indicated in the service description.
B: If you want to keep the functionality, restart the service periodically, possibly automated via the Task Scheduler.
Updated info from MS 19/07/2017: The error is identified and should be fixed in the next 7-10 days. The mitigation is to restart the service when necessary.
Updated info from MS 31/07/2017: The error should be fixed in version 1.2.19.0. This can be checked in the Azure Portal under "Extensions" in the VM menu.

Azure SQL Database Creation Issue

About 11 hours ago I started a SQL database creation operation on Azure, and it is still being processed. Since my other databases took only seconds to create, it is obvious that there is a technical problem with this one.
AFAIK there is no option for a user to cancel the running process and start a new one. I have also tried to create another database, but the system throws the following error:
Unable to edit or replace deployment 'Microsoft.SQL.NewDatabase':
previous deployment from '11/24/2015 9:01:59 PM' is still active
(expiration time is '12/1/2015 9:01:59 PM').
Since I haven't purchased any support package, I also cannot request help from the Azure team.
