How to enable binlog on Aurora Serverless

I am setting up a new database using AWS Aurora Serverless and have a requirement to enable binlog. I think I have followed the documentation as-is, but I can't get it to work. How do I set it up?
Following the documentation, below is what I have tried to enable binlog:
Created a custom parameter group with Type "DB cluster parameter group" and Family "aurora5.6".
Changed the binlog_format parameter to ROW in that parameter group.
Created a new database with Role "serverless" and Engine "Aurora MySQL", and assigned it the parameter group created above.
Set backup retention to 3 days (I enabled this because I saw some posts saying that binlog doesn't really get enabled unless you enable backups).
I have also tried to modify the DB and apply/force the parameter group by selecting "Apply immediately".
I expected binlog to be enabled once the database went from modifying to available, and that I would see the global variable on the DB correctly set.
Instead, I see the following:
mysql> select variable_value from information_schema.global_variables where variable_name='log_bin';
+----------------+
| variable_value |
+----------------+
| OFF            |
+----------------+
1 row in set (0.01 sec)

The serverless version of Aurora only gives you a subset of parameters that you can change - see https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.how-it-works.html#aurora-serverless.parameter-groups - and turning on binlogging is, of course, not available. So if you need your Aurora DB to act as a replication master, don't use Serverless!
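For comparison, on a provisioned (non-serverless) Aurora MySQL cluster that uses a custom DB cluster parameter group with binlog_format = ROW and a nonzero backup retention period, a check like the one below should report log_bin = ON and binlog_format = ROW (a sketch; the query is standard MySQL, same as the one in the question):

select variable_name, variable_value
from information_schema.global_variables
where variable_name in ('log_bin', 'binlog_format');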

Related

How to get Snowflake host and port number to create a connection in SAP Analytics Cloud?

SAP Analytics Cloud's Snowflake Connector needs these details for setting up a Snowflake connection.
How can I get these details from Snowflake?
I'm trying to follow this guide.
It appears that you're attempting to configure SAP Analytics Cloud's Snowflake Connector.
The host and port of your Snowflake account (also known as its deployment URL) can be taken from the URL you use to connect to Snowflake's Web UI. Here's an example:
https://mzf0194.us-west-2.snowflakecomputing.com/
For the above URL, the input for the Server field of the form would be mzf0194.us-west-2.snowflakecomputing.com:443 (443 is the default HTTPS port that Snowflake serves on).
Alternatively, if you have access to any other Snowflake-connected application (such as SnowSQL) that lets you run a SQL query, run the following to extract it:
select t.value:host || ':443' snowflake
from table(flatten(parse_json(system$whitelist()))) t
where t.value:type = 'SNOWFLAKE_DEPLOYMENT';
An example output that carries the host/port:
+---------------------------------------------+
| SNOWFLAKE                                   |
|---------------------------------------------|
| p7b41m.eu-west-1.snowflakecomputing.com:443 |
+---------------------------------------------+
If you're uncertain about what these all mean, you'll need to speak to other, current Snowflake users or administrators in your organization.
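If SYSTEM$WHITELIST isn't available to you, another rough way is to assemble the host from the current account and region. Note that this is an approximation I'm sketching here: CURRENT_REGION() returns values like AWS_US_WEST_2, which need mapping to the URL form (us-west-2), and account URL formats vary across deployments:

select lower(current_account()) || '.' ||
       lower(replace(replace(current_region(), 'AWS_', ''), '_', '-')) ||
       '.snowflakecomputing.com:443' as snowflake_host;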

AWS DataPipeline insert status with SQLActivity

I am looking for a way to record the status of the pipeline in a DB table. I assume this is a very common use case.
Is there any way I can record:
status and time of completion of the complete pipeline.
status and time of completion of selected individual activities.
the ID of individual runs/execution.
The only way I found was using SQLActivity that depends on an individual activity, but even there I cannot access the status or timestamp of the parent node.
I am using a JDBC connection to connect to a remote SQL Server, and the pipeline is for copying S3 files into the SQL Server DB.
Hmmm... I haven't tried this, but I can hit you with some pointers that may achieve the desired results. However, you will have to do the research and figure out the actual implementation.
Option 1
Create a ShellCommandActivity with dependsOn set to the last activity in your pipeline. Your shell script can use the AWS CLI's list-runs command to fetch details of the current run; you can use filters to achieve this.
Use staging data to move the output of the previous ShellCommandActivity into a SqlActivity that eventually inserts into the destination SQL Server (see the table sketch below).
Option 2
Use AWS Lambda to run aws datapipeline list-runs periodically, with filters, and update the destination table with the latest activities.
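For either option you'll also need a destination table to write into. A minimal sketch for SQL Server follows; all names here are illustrative, not part of any Data Pipeline API:

CREATE TABLE dbo.PipelineRunLog (
    RunId        VARCHAR(128) NOT NULL,  -- execution/attempt id from list-runs
    ActivityName VARCHAR(128) NOT NULL,
    Status       VARCHAR(32)  NOT NULL,  -- e.g. FINISHED, FAILED
    FinishedAt   DATETIME2    NULL,      -- completion time reported by list-runs
    LoggedAt     DATETIME2    NOT NULL DEFAULT SYSUTCDATETIME()
);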

Analytics are not displayed real time WSO2 Analytics

I have configured a distributed setup of API Manager 2.1.0 and configured Analytics 2.1.0 as well. It is taking too long to display the analytics after API invocation.
Server 1 (1 Publisher instance,1 Store instance, 1 Analytics instance, 1 Traffic Manager instance)
Server 2 (1 Key Manager Instance, 1 Gateway Instance)
Server 3 (1 Key Manager Instance, 1 Gateway Instance)
It seems the batch scripts run only once per day, though the cron expression is set to "0 0/5 * 1/1 * ? *" in a few scripts such as APIM_STAT_SCRIPT, APIM_STAT_SCRIPT_THROTTLE, and APIM_LAST_ACCESS_TIME_SCRIPT.
When I try to execute those scripts manually, I get a warning:
"Scheduled task for the script : APIM_LAST_ACCESS_TIME_SCRIPT is already running. Please try again after the scheduled task is completed."
But the data is not populated in the summary tables until the next day.
I want these scripts to be executed every 15 minutes.
When I configured single API Manager 2.1.0 instance with Analytics 2.1.0 in the same server it worked as expected.
How can I resolve this?
Since the number of analytics records is increasing day by day, executing the batch scripts takes longer and longer. That is why the analytics take so long to display.
To improve performance, we can remove historical data using the Data Purging option in APIM Analytics.
I was able to resolve the above issue after purging historical data.
For more information, refer to https://docs.wso2.com/display/AM210/Purging+Analytics+Data
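If you also want the schedule itself changed, and assuming these scripts take a standard Quartz cron expression (the expression quoted in the question is in that format), an every-15-minutes schedule would be written as:

0 0/15 * 1/1 * ? *

I haven't verified this against every Analytics 2.1.0 script, so treat it as a starting point.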

Auditing a Single User in SQL Server

I have an application that accesses my SQL Server using a username/password. However, our lead developer has access to this password, and I suspect that he may be utilizing it outside of our terms of agreement. I want to audit any time this username (appaccount) accesses our database, along with any commands that are issued.
Luckily, this application uses purely stored procedures with passed-in parameters when it accesses the database, so any time the account runs an ad-hoc T-SQL statement, it has to be from our developer in question.
My ideal output would be something like this:
Datetime | Username   | Action Performed
11:23am  | appaccount | "Select * from claimstable"
11:26am  | appaccount | "update table ...(skip change control process)"
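One way to produce this kind of trail is SQL Server Audit, filtered to the login in question (the WHERE filter requires SQL Server 2012 or later). A minimal sketch; the database name MyAppDb and the file path are placeholders for your environment:

USE master;
GO
-- Capture only activity from the application login:
CREATE SERVER AUDIT AppAccountAudit
    TO FILE (FILEPATH = 'C:\Audits\')
    WHERE server_principal_name = 'appaccount';
GO
ALTER SERVER AUDIT AppAccountAudit WITH (STATE = ON);
GO
USE MyAppDb;
GO
CREATE DATABASE AUDIT SPECIFICATION AppAccountDbAudit
    FOR SERVER AUDIT AppAccountAudit
    ADD (SELECT, INSERT, UPDATE, DELETE, EXECUTE ON DATABASE::MyAppDb BY public)
    WITH (STATE = ON);
GO
-- Read the captured statements back, roughly matching the desired output:
SELECT event_time, server_principal_name, statement
FROM sys.fn_get_audit_file('C:\Audits\*', DEFAULT, DEFAULT);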

Amazon RDS - are there workarounds to change a database time zone in SQL Server?

Amazon recently announced support for time zone change in Oracle RDS.
Since this is still not supported for Microsoft SQL Server 2012, are there any workarounds to obtain functionality similar to changing the whole database time zone?
Since you're asking for workarounds...
We basically disregard server time / database time zone entirely and work off of UTC: GETUTCDATE() for all 'DateCreated' columns, for instance. Since we've committed to that approach, we just don't bump up against any issues.
If you need to store the time zone alongside your date data, you can use DATETIMEOFFSET.
The one caveat is that maintenance plans will be run on server time. This has not been an issue because we normalize everything to local time (which is not UTC and not server time) in any of our calendaring programs.
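As a concrete sketch of that approach (table and column names are just illustrative):

CREATE TABLE dbo.Orders (
    OrderId INT IDENTITY PRIMARY KEY,
    -- Always store UTC; convert at the application edge:
    DateCreated DATETIME2 NOT NULL DEFAULT GETUTCDATE(),
    -- Or keep the originating offset when it matters:
    CreatedWithOffset DATETIMEOFFSET NOT NULL DEFAULT SYSDATETIMEOFFSET()
);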
I did this with MySQL on RDS by changing my instance's DB parameter group to a custom one whose parameters I can edit.
I then created the following procedure:
DELIMITER |
CREATE PROCEDURE mysql.init_connect_procedure ()
BEGIN
    -- Skip the rdsadmin maintenance user (see the warning below):
    IF NOT (POSITION('rdsadmin@' IN user()) = 1) THEN
        SET SESSION time_zone = 'America/New_York';
    END IF;
END |
DELIMITER ;
Note: every other instruction on the internet uses the function current_user() instead of user(), which did not work for me!
The catch with this configuration is that you then have to give all your database users the privilege to execute this procedure, or they won't even be able to connect to the database. So for every user, and every future user, you have to run this command (and no, there is no wildcard access to procedures):
GRANT EXECUTE ON PROCEDURE mysql.init_connect_procedure TO 'user'@'%';
I edited the init_connect parameter to be set to CALL mysql.init_connect_procedure. I am sure SQL Server has an equivalent parameter, if not the same one.
Restart the server and you should be good!
Warning: rdsadmin is the root user that only Amazon has the password to, and it is used to maintain and back up the database. You don't want to change the time zone for this user, or you might damage your entire database; hence the check in the procedure to make sure it is not this user. I really recommend verifying the equivalent user name for SQL Server. This solution is only for MySQL, and it is a terrible solution; unfortunately I had no other choice. If you can avoid doing this, handle the time zone on your application end.
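If you go down this road anyway, a quick way to confirm the procedure took effect is to reconnect as a normal (non-rdsadmin) user and check the session time zone:

SELECT @@session.time_zone, NOW(), UTC_TIMESTAMP();
-- Expect 'America/New_York', with NOW() offset from UTC_TIMESTAMP() accordingly.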
