Is there an Erwin Macro to pull out the table comment/definition?

I created a script template to generate extended properties, basically to include the data dictionary in the database. However, I couldn't find any macro to read out the table comment from the model. Is there any hack for this?

This is DBMS-specific in ERwin.
Depending on the DBMS, SQL Server or Oracle will each have their own FET (Forward Engineering Template).
For SQL Server: if you edit the SQL Server template for creating a schema, which in SQL Server is what generates comments and UDPs as extended properties, here is what is shown.
/* Generate comments and UDP's as Extended Properties. */
[
/* Set the variables required by the "Clause: Specify Extended Properties". */
Set( "var_RemoveVariables", "true" )
Set( "var_Operation", "sp_addextendedproperty" )
Set( "var_Comment", "Definition" )
Set( "var_Level0Type", "SCHEMA" )
Set( "var_Level0Name", Property( "Name" ) )
/* Generate Schema comments and UDPs */
Execute( "Clause: Specify Extended Properties" )
]
[
FE::Bucket( "150" )
ForEachOwnee( "Permission" )
{
Execute( "Create Permission" )
}
]
]
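For context, here is a minimal sketch of the kind of statement such a template ultimately needs to emit for a table-level comment in SQL Server. The property name, schema, table, and comment text below are placeholders, not values read from the model:
EXEC sp_addextendedproperty
    @name = N'MS_Description',   -- or whatever property name your template writes
    @value = N'Stores one row per customer account.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Customer';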

Related

Query internal stage Snowflake

Following the steps in the documentation, I created a stage and a file format in Snowflake, then staged a CSV file with PUT:
USE IA;
CREATE OR REPLACE STAGE csv_format_2;
CREATE OR REPLACE FILE FORMAT csvcol26 type='csv' field_delimiter='|';
PUT file://H:\\CSV_SWF_file_format_stage.csv @IA.public.csv_format_2
When I tried to query the staged object
SELECT a.$1 FROM @csv_format_2 (FORMAT=>'csvcol26', PATTERN=>'CSV_SWF_file_format_stage.csv.gz') a
I got:
SQL Error [2] [0A000]: Unsupported feature 'TABLE'.
Any idea on this error?
The first argument should be FILE_FORMAT instead of FORMAT:
SELECT a.$1
FROM @csv_format_2 (FILE_FORMAT=>'csvcol26', PATTERN=>'CSV_SWF_file_format_stage.csv.gz') a;
Related: Querying Data in Staged Files
Query staged data files using a SELECT statement with the following syntax:
SELECT [<alias>.]$<file_col_num>[.<element>] [ , [<alias>.]$<file_col_num>[.<element>] , ... ]
FROM { <internal_location> | <external_location> }
[ ( FILE_FORMAT => '<namespace>.<named_file_format>', PATTERN => '<regex_pattern>' ) ]
[ <alias> ]
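As an optional refinement (a sketch, reusing the stage and format names from the question): the named file format can be attached to the stage when it is created, so queries against the stage no longer need to pass FILE_FORMAT explicitly:
USE IA;
CREATE OR REPLACE FILE FORMAT csvcol26 TYPE = 'csv' FIELD_DELIMITER = '|';
-- Attach the named file format to the internal stage itself
CREATE OR REPLACE STAGE csv_format_2 FILE_FORMAT = (FORMAT_NAME = 'csvcol26');
-- The stage's format is now used automatically when querying staged files
SELECT a.$1, a.$2
FROM @csv_format_2 a;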

How to deploy PostgreSQL schema to Google Cloud SQL database using Pulumi?

I'm trying to initialize a managed PostgreSQL database using Pulumi. The PostgreSQL server itself is hosted and managed by Google Cloud SQL, but I set it up using Pulumi.
I can successfully create the database, but I'm stumped how to actually initialize it with my schemas, users, tables, etc. Does anyone know how to achieve this?
I believe I need to use the Postgres provider, similar to what they do for MySQL in this tutorial or this example. The below code shows what I have so far:
# Create database resource on Google Cloud
instance = sql.DatabaseInstance(  # This works
    "db-instance",
    name="db-instance",
    database_version="POSTGRES_12",
    region="europe-west4",
    project=project,
    settings=sql.DatabaseInstanceSettingsArgs(
        tier="db-g1-small",  # Or: db-n1-standard-4
        activation_policy="ALWAYS",
        availability_type="REGIONAL",
        backup_configuration={
            "enabled": True,
        }
    ),
    deletion_protection=False,
)

database = sql.Database(  # This works as well
    "db",
    name="db",
    instance=instance.name,
    project=project,
    charset="UTF-8",
)

# The below should create a table such as
# CREATE TABLE users (id uuid, email varchar(255), api_key varchar(255));
# How to tell it to use this SQL script?
# How to connect it to the above created PostgreSQL resource?
postgres = pg.Database(  # This doesn't work
    "users",
    name="users",
    is_template=False,
)
Here is sample code with an explanation of how we set everything up, including creating/deleting tables, with Pulumi.
The code will look like this:
# Postgres https://www.pulumi.com/docs/reference/pkg/postgresql/
# provider: https://www.pulumi.com/docs/reference/pkg/postgresql/provider/
import pulumi
import pulumi_postgresql as postgres
import pg8000.native

# myinstance and users are the Cloud SQL instance and SQL user created earlier
# (see the linked sample for their definitions).
postgres_provider = postgres.Provider(
    "postgres-provider",
    host=myinstance.public_ip_address,
    username=users.name,
    password=users.password,
    port=5432,
    superuser=True,
)

# Creates a database on the instance in Google Cloud with the provider we created
mydatabase = postgres.Database(
    "pulumi-votes-database",
    encoding="UTF8",
    opts=pulumi.ResourceOptions(provider=postgres_provider),
)

# Table creation/deletion is via pg8000 https://github.com/tlocke/pg8000
def tablecreation(mytable_name):
    print("tablecreation with:", mytable_name)
    create_first_part = "CREATE TABLE IF NOT EXISTS"
    create_sql_query = "(id serial PRIMARY KEY, email VARCHAR ( 255 ) UNIQUE NOT NULL, api_key VARCHAR ( 255 ) NOT NULL)"
    create_combined = f'{create_first_part} {mytable_name} {create_sql_query}'
    print("tablecreation create_combined_sql:", create_combined)
    myconnection = pg8000.native.Connection(
        host=postgres_sql_instance_public_ip_address,
        port=5432,
        user=postgres_sql_user_username,
        password=postgres_sql_user_password,
        database=postgres_sql_database_name,
    )
    print("tablecreation starting")
    cursor = myconnection.run(create_combined)
    print("Table Created:", mytable_name)
    selectversion = 'SELECT version();'
    cursor2 = myconnection.run(selectversion)
    print("SELECT Version:", cursor2)

def droptable(table_to_drop):
    first_part_of_drop = "DROP TABLE IF EXISTS"
    last_part_of_drop = "CASCADE"
    combinedstring = f'{first_part_of_drop} {table_to_drop} {last_part_of_drop}'
    conn = pg8000.native.Connection(
        host=postgres_sql_instance_public_ip_address,
        port=5432,
        user=postgres_sql_user_username,
        password=postgres_sql_user_password,
        database=postgres_sql_database_name,
    )
    print("droptable delete_combined_sql ", combinedstring)
    cursor = conn.run(combinedstring)
    print("droptable completed ", cursor)
After the first time you bring the infrastructure up via pulumi up -y, you can uncomment the following code block in __main__.py, add the configs for the PostgreSQL server via the CLI, and then run pulumi up -y again:
create_table1 = "votertable"
creating_table = tablecreation(create_table1)
print("")
create_table2 = "regionals"
creating_table = tablecreation(create_table2)
print("")
drop_table = "table2"
deleting_table = droptable(drop_table)
The settings for the table are in the Pulumi.dev.yaml file and are set via pulumi config set.
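For reference, with create_table1 = "votertable" above, the tablecreation helper builds and runs a statement equivalent to this SQL:
CREATE TABLE IF NOT EXISTS votertable (
    id serial PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    api_key VARCHAR(255) NOT NULL
);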

Create External Table in Azure SQL Data warehouse to a wild card based file or folder path

I know we can create an external table in Azure SQL Data Warehouse pointing to a LOCATION that is either a file path or a folder path. Can this file or folder path be based on a wildcard pattern instead of an explicit path?
Here my file path is a location in Azure Data Lake Store.
-- Syntax for SQL Server
-- Create a new external table
CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
( <column_definition> [ ,...n ] )
WITH (
LOCATION = 'folder_or_filepath',
DATA_SOURCE = external_data_source_name,
FILE_FORMAT = external_file_format_name
[ , <reject_options> [ ,...n ] ]
)
[;]
Polybase / External Tables do not support wildcards at this time. Simply have one folder per external table you require. If you feel this is an important missing feature you can create a request and vote for it here:
https://feedback.azure.com/forums/307516-sql-data-warehouse
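For instance, pointing LOCATION at a folder makes PolyBase read every file underneath it, which often removes the need for a wildcard. A minimal sketch, assuming an external data source and file format already exist (all names below are placeholders):
CREATE EXTERNAL TABLE dbo.SalesStaging
(
    SaleId INT,
    SaleAmount DECIMAL(10, 2)
)
WITH (
    LOCATION = '/sales/2021/',            -- a folder: every file under it is read
    DATA_SOURCE = MyAdlsDataSource,
    FILE_FORMAT = MyDelimitedFileFormat
);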
Bear in mind that PolyBase (in Azure SQL Data Warehouse) can now read files either in blob storage or in Azure Data Lake Storage (ADLS). Therefore, as another workaround, you could use Azure Data Lake Analytics (ADLA) and U-SQL to filter and move the files you want from blob store into your lake, where PolyBase can read them, e.g.:
// Move data from blob store to the data lake,
// adding the source filename via the {filepath} file-set pattern
DECLARE @inputFilepath string = "wasb://someContainer@someStorageAccount.blob.core.windows.net/someFilter/{filepath}.csv";
DECLARE @outputFilepath string = "output/special folder/output.csv";

@input =
    EXTRACT
        ... // your column list
        filepath string
    FROM @inputFilepath
    USING Extractors.Csv();

@filtered =
    SELECT *
    FROM @input
    WHERE filepath.Contains("yourFilter");

// Export as csv
OUTPUT @filtered
TO @outputFilepath
USING Outputters.Csv(quoting : false);

// Now the data is in the Data Lake, which PolyBase can also use as a source

Is it possible to change name of the system table

I want to change the name of a system table in my database. Is it possible? Probably I shouldn't change it, but I'm curious.
When I execute sp_rename I get the following error:
Msg 15001, Level 16, State 1, Procedure sp_rename, Line 404
Object 'cdc.[dbo_CdcTest_CT]' does not exist or is not a valid object for this operation.
Edit:
I want to change the names of tables created by Change Data Capture, because I want to disable the CDC mechanism for a table and still keep the data. I know that I can create an additional table and move the data from the CDC table there, but it would be easier to rename the CDC table and then disable CDC for the specified table.
No, you cannot change the name of system tables. However, you can refer to them by a different name.
You can use synonyms for that:
CREATE SYNONYM [ schema_name_1. ] synonym_name FOR <object>
<object> :: =
{
[ server_name.[ database_name ] . [ schema_name_2 ].| database_name . [ schema_name_2 ].| schema_name_2. ] object_name
}
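For example (the synonym name here is just illustrative), the CDC change table from the error message could be exposed under a friendlier name like this:
CREATE SYNONYM dbo.CdcTest_Changes FOR cdc.dbo_CdcTest_CT;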
Also note what the documentation says about sp_rename:
Changes the name of a user-created object in the current database. This object can be a table, index, column, alias data type, or Microsoft .NET Framework common language runtime (CLR) user-defined type.
So system tables such as the CDC change tables are outside its scope.

copy only tables with data from one database to another database

I have two databases, dbOne (version 10.50.1600, located on the office server) and dbTwo (version 10.0.1600, located on my local server).
I want to copy dbOne's tables, with data, to dbTwo.
Is there any way or script to do it? I don't want to upgrade my local server version!
"Import and Export Data" tool provided by SQL Server is a good tool to transfer data between two different servers.
How about generating the database scripts as described in the following articles:
http://www.codeproject.com/Articles/598148/Generate-insert-statements-from
and
http://msdn.microsoft.com/en-us/library/ms186472(v=sql.105).aspx
It's possible to transfer data from one server to another using a SQL linked server query if both are on the same network. Below are the steps.
Copying table structures
Generate a script of all tables from the server1 database, then execute it in the server2 database, using the Generate Scripts utility.
Copying table data
sp_addlinkedserver [ @server= ] 'server' [ , [ @srvproduct= ] 'product_name' ]
[ , [ @provider= ] 'provider_name' ]
[ , [ @datasrc= ] 'data_source' ]
[ , [ @location= ] 'location' ]
[ , [ @provstr= ] 'provider_string' ]
[ , [ @catalog= ] 'catalog' ]
INSERT INTO databaseserver2.db1.dbo.table1 (columnList)
SELECT columnList
FROM databaseserver1.db1.dbo.table1
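As a rough sketch (the linked server name, provider, and data source below are placeholders), registering the remote server could look like this:
EXEC sp_addlinkedserver
    @server = N'OFFICESERVER',         -- name used in four-part queries
    @srvproduct = N'',
    @provider = N'SQLNCLI',            -- SQL Server Native Client OLE DB provider
    @datasrc = N'office-server-host';  -- hostname or IP of the remote instance
You may also need to map credentials with sp_addlinkedsrvlogin before the INSERT ... SELECT above can authenticate.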
Here are the general steps you need to take in order for this to work:
Migrating tables
Create scripts for tables in db1. Just right click the table and go to “Script table as -> Create to”
Re-order the scripts so that tables that don’t depend on any other tables are executed first
Execute scripts on db2
Migrating data
The most convenient way is to use the SQL Server Import/Export wizard.
