Snowpark SQL compilation error: unexpected '-' in Role name

When connecting to Snowpark and setting the role, database, schema, and warehouse as in the session code below, there is a SQL compilation error on the role name since it contains dashes.
dbname = "MY_DB"
schemaname = "MY_SCHEMA"
warehouse = "MY_WH"
read_session.sql(r"USE ROLE MY-SNOWFLAKE-ROLE").collect()  # fails: the dash breaks the unquoted identifier
read_session.sql(f"USE WAREHOUSE {warehouse}").collect()
read_session.sql(f"USE DATABASE {dbname}").collect()
read_session.sql(f"USE SCHEMA {dbname}.{schemaname}").collect()

The role name has to be wrapped in double quotes so Snowflake treats it as a quoted identifier, while the entire USE statement needs to be within single quotes so the inner double quotes don't need escaping.
dbname = "MY_DB"
schemaname = "MY_SCHEMA"
warehouse = "MY_WH"
read_session.sql(r'USE ROLE "MY-SNOWFLAKE-ROLE"').collect()
read_session.sql(f"USE WAREHOUSE {warehouse}").collect()
read_session.sql(f"USE DATABASE {dbname}").collect()
read_session.sql(f"USE SCHEMA {dbname}.{schemaname}").collect()

Related

What permissions do we need to create file_format in Snowflake

I am trying to create a file format so I can create a stage in Snowflake with a custom role. I have granted privileges to create a stage and to use the storage integration, schema, and database, but it still shows the error "SQL access control error: Insufficient privileges to operate on schema 'PUBLIC'".
It's able to create the stage without the file_format parameter, but file_format is required for creating the table.
Thanks
The code I have tried so far:
grant create stage on schema public to role my_role2;
grant usage on integration s3_int to role my_role2;
GRANT USAGE, MONITOR ON ALL SCHEMAS IN DATABASE test TO ROLE my_role2;
grant create table on schema TEST.PUBLIC to role my_role2;
create or replace file format my_csv_format
type = csv field_delimiter = ',' skip_header = 1
field_optionally_enclosed_by = '"'
null_if = ('NULL', 'null')
empty_field_as_null = true;
create or replace stage demo_stage url=''
STORAGE_INTEGRATION="s3_int"
file_format = my_csv_format;
Creating the file format gives the error "SQL access control error: Insufficient privileges to operate on schema 'PUBLIC'".
As documented in the Snowflake access control reference, your role needs the CREATE FILE FORMAT privilege on the schema:
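A grant along these lines should fix it (schema and role names taken from the question):
grant create file format on schema TEST.PUBLIC to role my_role2;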

How to Run DDL on Snowflake SHOW <object>

I am not sure if it is possible, but can you run SQL against the output of a SHOW command, e.g.:
show grants to role SYSADMIN
If I run this, it shows me the privileges attached to the role in the results pane, but when I try to run a sub-query on it, it always gives me an error.
You can't query the rows of a SHOW command directly (unless you're using an external client and driving it programmatically, e.g. via ODBC, JDBC, or Python).
What you can do in a client worksheet is use the results indirectly like this:
show grants to role sysadmin;
select * from table(result_scan(last_query_id()));
To use it in a query, just alias and reference it like a table:
show grants to role sysadmin;
show grants to role my_new_role;
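-- last_query_id() is the most recent query (the my_new_role SHOW); last_query_id(-2) is the sysadmin SHOW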
select NR.*
from table(result_scan(last_query_id())) NR
inner join table(result_scan(last_query_id(-2))) SA
on NR."privilege" = SA."privilege"
and NR."granted_on" = SA."granted_on"
and NR."name" = SA."name"
;
You can then use the age-old DBA trick of creating a SQL generator:
select 'grant ' || "privilege" || ' on ' || "granted_on" || ' etc.. etc...' as SQL_COMMAND
from table(result_scan(last_query_id()))
where "privilege" <> 'OWNERSHIP'
;
You can even use an SP to automate execution of the generated commands:
https://community.snowflake.com/s/article/Executing-Multiple-SQL-Statements-in-a-Stored-Procedure
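For illustration, a minimal sketch of such a procedure (the procedure name, parameter, and logic are illustrative, not from the linked article; result_scan offsets shift inside a procedure, so pass the generator query in as self-contained text):
create or replace procedure run_generated_sql(generator_sql string)
returns string
language javascript
execute as caller
as
$$
// Run the generator query, then execute each generated command in turn.
var rs = snowflake.createStatement({ sqlText: GENERATOR_SQL }).execute();
var count = 0;
while (rs.next()) {
    // Column 1 holds the generated SQL text (the SQL_COMMAND alias above).
    snowflake.createStatement({ sqlText: rs.getColumnValue(1) }).execute();
    count++;
}
return count + " statements executed";
$$;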

How to set the CONNECTION_OPTIONS = 'ApplicationIntent=ReadOnly' for elastic queries on Azure SQL?

I am accessing the other database using elastic queries. The data source was created like this:
CREATE EXTERNAL DATA SOURCE TheCompanyQueryDataSrc WITH (
TYPE = RDBMS,
--CONNECTION_OPTIONS = 'ApplicationIntent=ReadOnly',
CREDENTIAL = ElasticDBQueryCred,
LOCATION = 'thecompanysql.database.windows.net',
DATABASE_NAME = 'TheCompanyProd'
);
To reduce the database load, a read-only replica was created and should be used. As far as I understand it, I should add CONNECTION_OPTIONS = 'ApplicationIntent=ReadOnly' (commented out in the above code). However, I only get the error Incorrect syntax near 'CONNECTION_OPTIONS'.
Both databases (the one that defines the connection + external tables, and the other, to-be-read-only one) are on the same server (thecompanysql.database.windows.net). Both are set to compatibility level SQL Server 2019 (150).
What else should I set to make it work?
The CREATE EXTERNAL DATA SOURCE syntax doesn't support the option CONNECTION_OPTIONS = 'ApplicationIntent=ReadOnly', so we can't use it in these statements.
If you want to achieve that read-only behavior, the way to do it is to use a user account that only has read-only (db_datareader) permission to log in to the external database.
For example:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';
CREATE DATABASE SCOPED CREDENTIAL SQL_Credential
WITH
IDENTITY = '<username>',  -- read-only user account
SECRET = '<password>';
CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc
WITH
( TYPE = RDBMS,
LOCATION = '<server_name>.database.windows.net',
DATABASE_NAME = 'Customers',
CREDENTIAL = SQL_Credential
);
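To query through the data source, you then map the remote table as an external table (the table and column names below are hypothetical and must match the remote schema):
CREATE EXTERNAL TABLE [dbo].[CustomerInformation] (
CustomerID INT NOT NULL,
CustomerName NVARCHAR(50)
)
WITH ( DATA_SOURCE = MyElasticDBQueryDataSrc );
SELECT * FROM [dbo].[CustomerInformation];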
Since the option is not supported, we can't use it with elastic query. ApplicationIntent=ReadOnly can only be set on a direct client connection, e.g. in the SSMS connection dialog under Options > Additional Connection Parameters.
HTH.

Azure SQL: Adding from Blob Not Recognizing Storage

I am trying to load data from a CSV file to a table in my Azure Database following the steps in https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver15#f-importing-data-from-a-file-in-azure-blob-storage, using the Managed Identity option. When I run the query, I receive this error:
Failed to execute query. Error: Referenced external data source "adfst" not found.
This is the name of the container I created within my storage account. I have also tried using my storage account name, with the same error. Reviewing https://learn.microsoft.com/en-us/sql/relational-databases/import-export/examples-of-bulk-access-to-data-in-azure-blob-storage?view=sql-server-ver15 does not provide any further insight as to what may be causing the issue. My storage account does not have public (anonymous) access configured.
I'm assuming that I'm missing a simple item that would resolve this issue, but I can't figure out what it is. My SQL query is below, modified to not include content that should not be required.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '**************';
GO
CREATE DATABASE SCOPED CREDENTIAL msi_cred WITH IDENTITY = '***********************';
CREATE EXTERNAL DATA SOURCE adfst
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://**********.blob.core.windows.net/adfst'
, CREDENTIAL= msi_cred
);
BULK INSERT [dbo].[Adventures]
FROM 'Startracker_scenarios.csv'
WITH (DATA_SOURCE = 'adfst');
If you want to use Managed Identity to access Azure Blob storage when you run the BULK INSERT command, you need to enable Managed Identity for the SQL server; otherwise, you will get the error Referenced external data source "***" not found. Besides that, you also need to assign the Storage Blob Data Contributor role to the server's managed identity. If you do not do that, you cannot access the CSV file stored in the Azure blob.
For example
Enable Managed Identity for the SQL server
Connect-AzAccount
#Enable MSI for SQL Server
Set-AzSqlServer -ResourceGroupName your-database-server-resourceGroup -ServerName your-SQL-servername -AssignIdentity
Assign role via Azure Portal
Under your storage account, navigate to Access Control (IAM), and select Add role assignment. Assign Storage Blob Data Contributor RBAC role to the server which you've registered with Azure Active Directory (AAD)
Test
a. Data
1,James,Smith,19750101
2,Meggie,Smith,19790122
3,Robert,Smith,20071101
4,Alex,Smith,20040202
b. script
CREATE TABLE CSVTest
(ID INT,
FirstName VARCHAR(40),
LastName VARCHAR(40),
BirthDate SMALLDATETIME)
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'YourStrongPassword1';
GO
--> Change to using Managed Identity instead of SAS key
CREATE DATABASE SCOPED CREDENTIAL msi_cred WITH IDENTITY = 'Managed Identity';
GO
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://jimtestdiag417.blob.core.windows.net/test'
, CREDENTIAL= msi_cred
);
GO
BULK INSERT CSVTest
FROM 'mydata.csv'
WITH (
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
DATA_SOURCE = 'MyAzureBlobStorage');
GO
select * from CSVTest;
GO

In the tutorial "Tutorial: Bulk Loading from a local file system using copy" what is the difference between my_stage and my_table permissions?

I started to go through the first tutorial for how to load data into Snowflake from a local file.
This is what I have set up so far:
CREATE WAREHOUSE mywh;
CREATE DATABASE Mydb;
Use Database mydb;
CREATE ROLE ANALYST;
grant usage on database mydb to role sysadmin;
grant usage on database mydb to role analyst;
grant usage, create file format, create stage, create table on schema mydb.public to role analyst;
grant operate, usage on warehouse mywh to role analyst;
//tutorial 1 loading data
CREATE FILE FORMAT mycsvformat
TYPE = "CSV"
FIELD_DELIMITER= ','
SKIP_HEADER = 1;
CREATE FILE FORMAT myjsonformat
TYPE="JSON"
STRIP_OUTER_ARRAY = true;
//create stage
CREATE OR REPLACE STAGE my_stage
FILE_FORMAT = mycsvformat;
//Use snowsql for this and make sure that the role, db, and warehouse are selected: put file:///data/data.csv @my_stage;
// put file on stage
PUT file://contacts.csv @my
List @~;
list @%mytable;
Then in my active Snowsql when I run:
Put file:///Users/<user>/Documents/data/data.csv @my_table;
I have confirmed I am in the correct role (ACCOUNTADMIN), but I get:
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
So then I try to create the table in Snowsql and am successful:
create or replace table my_table(id varchar, link varchar, stuff string);
I still run into this error after I run:
Put file:///Users/<>/Documents/data/data.csv @my_table;
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
What is the difference between putting a file to my_table and to my_stage in this scenario? Thanks for your help!
EDIT:
CREATE OR REPLACE TABLE myjsontable(json variant);
COPY INTO myjsontable
FROM @my_stage/random.json.gz
FILE_FORMAT = (TYPE= 'JSON')
ON_ERROR = 'skip_file';
CREATE OR REPLACE TABLE save_copy_errors AS SELECT * FROM TABLE(VALIDATE(myjsontable, JOB_ID=>'enterid'));
SELECT * FROM SAVE_COPY_ERRORS;
//error for random: Error parsing JSON: invalid character outside of a string: '\\'
//no error for generated
SELECT * FROM Myjsontable;
REMOVE @My_stage pattern = '.*.csv.gz';
REMOVE @My_stage pattern = '.*.json.gz';
//yay, you're done!
The PUT command copies the file from your local drive to the stage. You should PUT to the stage, not to the table:
put file:///Users/<>/Documents/data/data.csv @my_stage;
The copy command loads it from the stage.
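For example, something like this should load the staged file (PUT gzip-compresses uploads by default, hence the .gz suffix; the file and format names follow the question's setup):
copy into my_table
from @my_stage/data.csv.gz
file_format = (format_name = mycsvformat);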
But in the documentation it is mentioned that one gets created by default for every table:
Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable
So in this case, without creating a named stage, the file should load into the table's default stage.
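Note that the table stage must be referenced with the % prefix: the original error occurred because @my_table (without %) resolves to a named stage called MY_TABLE, which doesn't exist. A minimal sketch using the names from the question:
-- PUT to the table's implicit stage (note the % prefix)...
put file:///Users/<user>/Documents/data/data.csv @%my_table;
-- ...then COPY; with no FROM clause, COPY defaults to the table stage
copy into my_table file_format = (format_name = mycsvformat);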
