How to determine account_id and region from a query? - sql-server

I have datasets that are pulling from multiple Amazon RDS servers in multiple accounts, and I'd really like to be able to have the SQL Server instance tell me which account owns it and which region it lives in.
For example, this would be ideal when constructing ARNs on the fly:
SELECT id, 'arn:aws:quicksight:' + rdsadmin.dbo.get_region() +
':' + rdsadmin.dbo.get_account_id() + ':group/default/admin' AS groupArn
FROM my_rules_table
I've looked all over and I don't see a way to infer this information. I could create unique versions of those UDFs on every server with static values, but I'd really rather fetch the actual values dynamically.
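For illustration, a static pair of scalar UDFs like the following (values here are placeholders) would make the query above work, but maintaining per-server copies by hand is exactly what I'm trying to avoid:
CREATE FUNCTION dbo.get_account_id() RETURNS VARCHAR(20)
AS BEGIN RETURN '123412341234' END
GO
CREATE FUNCTION dbo.get_region() RETURNS VARCHAR(20)
AS BEGIN RETURN 'us-east-1' END
GO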
EDIT:
Another way to think about my request is that I want to do in Amazon RDS what I can do in all my other EC2 instances:
read -r account_id region <<< $(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r '. | "\(.accountId) \(.region)"')
echo "arn:aws:quicksight:$region:$account_id:group/default/admin"
This is just a workaround, because Amazon QuickSight restricts which SQL features can be used to fetch data.

I was unable to find this information exposed by Amazon RDS for Microsoft® SQL Server®, so I created a table to hold it in each RDS instance:
CREATE TABLE rds_instance (
    id UNIQUEIDENTIFIER NOT NULL DEFAULT(NEWID()),
    account_id VARCHAR(20) NOT NULL,
    environment VARCHAR(20) NOT NULL,
    region VARCHAR(20) NOT NULL,
    active BIT NOT NULL DEFAULT(0),
    PRIMARY KEY (id)
);
The values for account_id, environment, and region can be plugged in and used where needed. A copied database can be programmatically modified for its new placement:
UPDATE rds_instance SET active = 0;
INSERT INTO rds_instance (account_id, environment, region, active)
VALUES ('123412341234', 'stage', 'us-southwest-7', 1);
The instance information can be used to produce ARNs in a query like so:
SELECT u.fkid, 'arn:aws:quicksight:' + ri.region + ':' + ri.account_id +
':user/' + ri.environment + '_ns/' + u.username AS userArn
FROM users AS u
JOIN rds_instance AS ri ON (ri.active = 1)

Related

Share dynamic tables between Flink programs

I have a Flink job that creates a Dynamic table from a database changelog stream. The table definition looks as follows:
tableEnv.sqlUpdate("""
CREATE TABLE some_table_name (
id INTEGER,
name STRING,
created_at BIGINT,
updated_at BIGINT
)
WITH (
'connector' = 'kafka',
'topic' = 'topic',
'properties.bootstrap.servers' = 'localhost:9092',
'properties.zookeeper.connect' = 'localhost:2181',
'properties.group.id' = 'group_1',
'format' = 'debezium-json',
'debezium-json.schema-include' = 'true'
)
""")
When trying to reference that table in another running Flink application on the same cluster, my program returns an error: SqlValidatorException: Object 'some_table_name' not found. Is it possible to register that table somehow such that other programs can use it? For example in a statement like this:
tableEnv.sqlQuery("""
SELECT count(*) FROM some_table_name
""").execute().print()
Note that a table in Flink doesn't hold any data. Another Flink application can independently create another table backed by the same Kafka topic, for example. So not sharing tables between applications isn't as tragic as you might expect.
But you can share tables by storing them in an external catalog. E.g., you could use an Apache Hive catalog for this purpose. See the docs for more info.
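For example, a rough sketch in Flink SQL, assuming a Hive Metastore is reachable and the Flink Hive connector is on the classpath (catalog name and conf path are illustrative):
CREATE CATALOG hive_catalog WITH (
    'type' = 'hive',
    'hive-conf-dir' = '/opt/hive-conf'
);

USE CATALOG hive_catalog;

-- The table definition (not the data) is persisted in the Metastore, so any
-- other Flink job that registers the same catalog can query some_table_name.
CREATE TABLE some_table_name (
    id INTEGER,
    name STRING,
    created_at BIGINT,
    updated_at BIGINT
) WITH (
    'connector' = 'kafka',
    'topic' = 'topic',
    'properties.bootstrap.servers' = 'localhost:9092',
    'properties.group.id' = 'group_1',
    'format' = 'debezium-json',
    'debezium-json.schema-include' = 'true'
);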

Azure SQL: Adding from Blob Not Recognizing Storage

I am trying to load data from a CSV file to a table in my Azure Database following the steps in https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver15#f-importing-data-from-a-file-in-azure-blob-storage, using the Managed Identity option. When I run the query, I receive this error:
Failed to execute query. Error: Referenced external data source "adfst" not found.
This is the name of the container I created within my storage account. I have also tried using my storage account, with the same error. Reviewing https://learn.microsoft.com/en-us/sql/relational-databases/import-export/examples-of-bulk-access-to-data-in-azure-blob-storage?view=sql-server-ver15 does not provide any further insight as to what may be causing the issue. My storage account does not have public (anonymous) access configured.
I'm assuming that I'm missing a simple item that would resolve this issue, but I can't figure out what it is. My SQL query is below, modified to not include content that should not be required.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '**************';
GO
CREATE DATABASE SCOPED CREDENTIAL msi_cred WITH IDENTITY = '***********************';
CREATE EXTERNAL DATA SOURCE adfst
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://**********.blob.core.windows.net/adfst'
, CREDENTIAL= msi_cred
);
BULK INSERT [dbo].[Adventures]
FROM 'Startracker_scenarios.csv'
WITH (DATA_SOURCE = 'adfst');
If you want to use Managed Identity to access Azure Blob Storage when you run the BULK INSERT command, you need to enable Managed Identity for the SQL server; otherwise you will get the error Referenced external data source "***" not found. Besides, you also need to assign the Storage Blob Data Contributor role to the MSI. If you do not do that, you cannot access the CSV file stored in Azure Blob Storage.
For example
Enable Managed Identity for the SQL server
Connect-AzAccount
#Enable MSI for SQL Server
Set-AzSqlServer -ResourceGroupName your-database-server-resourceGroup -ServerName your-SQL-servername -AssignIdentity
Assign role via Azure Portal
Under your storage account, navigate to Access Control (IAM), and select Add role assignment. Assign Storage Blob Data Contributor RBAC role to the server which you've registered with Azure Active Directory (AAD)
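After the role assignment, you can sanity-check from T-SQL that the credential and the external data source actually exist in the database you are querying (standard catalog views):
SELECT name, credential_identity FROM sys.database_scoped_credentials;
SELECT name, type_desc, location FROM sys.external_data_sources;
If the data source is missing from the second result set, it was not created in the database you are running BULK INSERT against, which also produces the "not found" error.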
Test
a. Data
1,James,Smith,19750101
2,Meggie,Smith,19790122
3,Robert,Smith,20071101
4,Alex,Smith,20040202
b. script
CREATE TABLE CSVTest (
    ID INT,
    FirstName VARCHAR(40),
    LastName VARCHAR(40),
    BirthDate SMALLDATETIME
)
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'YourStrongPassword1';
GO
--> Change to using Managed Identity instead of SAS key
CREATE DATABASE SCOPED CREDENTIAL msi_cred WITH IDENTITY = 'Managed Identity';
GO
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://jimtestdiag417.blob.core.windows.net/test'
, CREDENTIAL= msi_cred
);
GO
BULK INSERT CSVTest
FROM 'mydata.csv'
WITH (
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
DATA_SOURCE = 'MyAzureBlobStorage');
GO
select * from CSVTest;
GO

How to check if SQL Server Tables are System Tables

Using the stored procedure sp_msforeachtable it's possible to execute a script for all tables in a database.
However, there are system tables which I'd like to exclude from that. Instinctively, I would check the properties IsSystemTable or IsMSShipped. These don't work like I expect: I have, for example, a table called __RefactorLog, but when I query whether it is a system or MS-shipped table, SQL Server reports that none of my tables are system tables:
exec (N'EXEC Database..sp_msforeachtable "PRINT ''? = '' + CAST(ObjectProperty(Object_ID(''?''), ''IsSystemTable'') AS VARCHAR(MAX))"') AS LOGIN = 'MyETLUser'
-- Results of IsSystemTable:
[dbo].[__RefactorLog] = 0
[schema].[myUserTable] = 0
and
exec (N'EXEC Database..sp_msforeachtable "PRINT ''? = '' + CAST(ObjectProperty(Object_ID(''?''), ''IsMSShipped'') AS VARCHAR(MAX))"') AS LOGIN = 'MyETLUser'
-- Results of IsMSShipped:
[dbo].[__RefactorLog] = 0
[schema].[myUserTable] = 0
When I look into the properties of the table (inside SSMS), the table is marked as a system object. An object property like IsSystemObject doesn't exist though (AFAIK).
How do I check if a table is a system object, apart from the object property? How does SSMS check if a table is a system object?
Management Studio 2008 seems to run some quite ugly code when opening the "System Tables" folder in the object explorer; the key bit seems to be:
CAST(
    case
        when tbl.is_ms_shipped = 1 then 1
        when (
            select major_id
            from sys.extended_properties
            where major_id = tbl.object_id and
                  minor_id = 0 and
                  class = 1 and
                  name = N'microsoft_database_tools_support') is not null then 1
        else 0
    end
AS bit) AS [IsSystemObject]
(Where tbl is an alias for sys.tables)
So it seems that it's a combination - either is_ms_shipped from sys.tables being 1, or having a particular extended property set.
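Rolled into a standalone query (a sketch against the same catalog views), that check looks like this:
SELECT
    s.name AS [schema_name],
    t.name AS [table_name],
    CAST(CASE
        WHEN t.is_ms_shipped = 1 THEN 1
        WHEN EXISTS (
            SELECT 1
            FROM sys.extended_properties AS ep
            WHERE ep.major_id = t.object_id
              AND ep.minor_id = 0
              AND ep.class = 1
              AND ep.name = N'microsoft_database_tools_support'
        ) THEN 1
        ELSE 0
    END AS bit) AS IsSystemObject
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;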
__refactorlog is, in contrast to what SSMS suggests, a user table. It is used during deployment to track schema changes that cannot be deduced from the current database state, for example renaming a table.
If all your other user tables are in a custom (non-dbo) schema, you can use a combination of the isMSshipped/isSystemTable attributes and the schema name to decide if a table is 'in scope' for your script.
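For example (a sketch; substitute your own schema name):
SELECT t.name
FROM sys.tables AS t
WHERE t.is_ms_shipped = 0
  AND SCHEMA_NAME(t.schema_id) = N'myschema';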
In the past I've worked on the assumption that, in the sys.objects table, column is_ms_shipped indicates whether an object is or is not a system object. (This column gets inherited by other system tables, such as sys.tables.)
This flag can be set by procedure sp_ms_markSystemObject. This, however, is an undocumented procedure, is not supported by Microsoft, I don't think we're supposed to know about it, so I didn't tell you about it.
Am I missing something?
However, there are system tables which I'd like to exclude from that
At least on SQL Server 2008, sp_MSforeachtable already excludes system tables, as this excerpt from it shows:
+ N' where OBJECTPROPERTY(o.id, N''IsUserTable'') = 1 ' + N' and o.category & ' + @mscat + N' = 0 '
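So a plain call like this (a minimal example) already iterates over user tables only:
EXEC sp_MSforeachtable @command1 = N'PRINT ''?''';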

SQL Server 2008 Resource Management - By User/Group/Database?

Is it possible to put a ceiling on the amount of CPU a certain database, or preferably a certain user or group of users, can use?
I have one app on a server that is maxing out the CPU due to complex queries that are being created at runtime. Until the specific module in that application is redeveloped (which could take months) I need to find a way to stop that specific application from choking SQL Server and reducing availability for other apps on the same server.
Have you looked at Resource Governor? It makes it possible to divide resources between groups of users.
This is only an example; for full information you need to look at the documentation:
Create the pools:
create resource pool Pool1
with (
min_cpu_percent = 30
, max_cpu_percent = 100
)
create resource pool Pool2
with (
min_cpu_percent = 50
, max_cpu_percent = 70
)
create resource pool Pool3
with (
min_cpu_percent = 5
, max_cpu_percent = 100
)
Create the workload groups and link them to pools:
create workload group Group1
using Pool1
create workload group Group2
using Pool2
create workload group Group3
using Pool3
Create the classifier function that decides which workload group will be used for the current session:
create function dbo.rg_class_simple() returns sysname
with schemabinding
as begin
    declare @grp_name as sysname
    if (suser_name() = 'user1')
        set @grp_name = 'Group1'
    else if (suser_name() = 'user2')
        set @grp_name = 'Group2'
    else if (suser_name() = 'user3')
        set @grp_name = 'Group3'
    return @grp_name
end
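Finally, register the function as the classifier and reconfigure Resource Governor; the pools and groups have no effect until this step runs:
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_class_simple);
ALTER RESOURCE GOVERNOR RECONFIGURE;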

In SQL Server 2005, is there an easy way to "copy" permissions on an object from one user/role to another?

I asked another question about roles and permissions, which mostly served to reveal my ignorance. One of the other outcomes was the advice that one should generally stay away from mucking with permissions for the "public" role.
OK, fine, but if I've already done so and want to re-assign the same permissions to a custom/"flexible" role, what's the best way to do that? What I've done so far is to run the Scripting wizard, and tell it to script object permissions without CREATE or DROP, then run a find-replace so that I wind up with a lot of "GRANT DELETE on [dbo.tablename] TO [newRole]". It gets the job done, but I feel like it could be prettier/easier. Any "best practice" suggestions?
Working from memory (no SQL on my gaming 'pooter), you can use sys.database_permissions
Run this and paste the results into a new query.
Edit, Jan 2012. Added OBJECT_SCHEMA_NAME.
You may need to pimp it to support schemas (dbo.) by joining onto sys.objects
SET NOCOUNT ON;
DECLARE @NewRole varchar(100), @SourceRole varchar(100);
-- Change as needed
SELECT @SourceRole = 'Giver', @NewRole = 'Taker';

SELECT
    state_desc + ' ' +
    permission_name + ' ON ' +
    OBJECT_SCHEMA_NAME(major_id) + '.' + OBJECT_NAME(major_id) +
    ' TO ' + @NewRole
FROM
    sys.database_permissions
WHERE
    grantee_principal_id = DATABASE_PRINCIPAL_ID(@SourceRole)
    AND
    -- 0 = DB, 1 = object/column, 3 = schema. 1 is normally enough
    class <= 3;
The idea of having a role is that you only need to setup the permissions once. You can then assign users, or groups of users to that role.
It's also possible to nest roles, so that a role can contain other roles.
Not sure if it's best practice, but if you have a complex set of permissions, with groups of users that need access to multiple applications, it makes sense to go something like:
NT User -> NT Security Group -> SQL Server Role -> SQL Server Role A, Role B ...
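On SQL Server 2005, the membership side of that chain is handled with sp_addrolemember (names here are illustrative):
-- Add a user to a role
EXEC sp_addrolemember @rolename = 'RoleB', @membername = 'SomeUser';
-- Nest one role inside another
EXEC sp_addrolemember @rolename = 'RoleA', @membername = 'RoleB';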
