I'm new to Snowflake and not very strong in SQL, but my question is whether it is possible to perform a POST request from Snowflake to the Azure Blob service REST API in order to obtain a user delegation key. Can this be done easily?
In Snowflake's documentation I read about external functions, which could perhaps be used to execute some kind of script for acquiring a user delegation key, but it seems to be a hassle to set this all up (see https://docs.snowflake.com/en/sql-reference/external-functions-creating-azure.html)
If not, do you by any chance have an idea of how to design a manageable process for obtaining access to Azure Blob Storage via Azure AD user credentials?
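For context, the call involved is an Azure AD-authenticated POST against the Blob service endpoint, which is why it doesn't map neatly onto plain SQL. Here is a minimal Python sketch of what that request looks like; the helper `build_udk_request` is hypothetical, and the Azure AD bearer token that must accompany the real request is omitted:

```python
import datetime

def build_udk_request(account: str,
                      start: datetime.datetime,
                      expiry: datetime.datetime):
    """Build the URL and XML body for Azure's 'Get User Delegation Key'
    REST operation. The real request must also carry an Azure AD
    Authorization: Bearer header and an x-ms-version header."""
    url = (f"https://{account}.blob.core.windows.net/"
           "?restype=service&comp=userdelegationkey")
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    body = (
        "<?xml version='1.0' encoding='utf-8'?>"
        "<KeyInfo>"
        f"<Start>{start.strftime(fmt)}</Start>"
        f"<Expiry>{expiry.strftime(fmt)}</Expiry>"
        "</KeyInfo>"
    )
    return url, body
```

Outside Snowflake, the `azure-storage-blob` SDK wraps this same operation as `BlobServiceClient.get_user_delegation_key`, which is usually the easier path if you can run the code in a service or function rather than in the database.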
We plan to load sensitive PII data from Azure Blob Storage (ADLS Gen2) into Snowflake using an external stage, which is secured by Azure credentials (a service principal) for the container where the data is stored.
However, this is not an acceptable solution for the cyber team. Using an encryption key was considered, but the key issues that were raised were:
- The complete container that stages the PII data could potentially be exposed.
- Allowing the Snowflake VNet subnet IDs.
Hence I am looking for best practices, or any further suggestions anyone may have, for using Azure external stages to load data into Snowflake.
Complete container that stages the PII data could potentially be exposed.
You may want to make sure the PII data is encrypted in Azure Blob Storage. Snowflake supports ingesting client-side encrypted data. You can read more here:
https://docs.snowflake.com/en/user-guide/security-encryption-end-to-end.html#ingesting-client-side-encrypted-data-into-snowflake
And this document discusses how to create a stage with client-side encryption:
https://docs.snowflake.com/en/sql-reference/sql/create-stage.html#external-stage-parameters-externalstageparams
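If you go the client-side encryption route, the stage needs a master key that you generate and manage yourself. A small Python sketch of producing one, assuming the stage expects a 256-bit key, base64-encoded (the helper name `generate_cse_master_key` is hypothetical):

```python
import base64
import os

def generate_cse_master_key() -> str:
    """Generate a random 256-bit key, base64-encoded, of the shape
    assumed for a client-side encryption MASTER_KEY on a stage.
    In practice, store and rotate this in a key vault, not in code."""
    return base64.b64encode(os.urandom(32)).decode("ascii")
```

With a key like this, only the ciphertext ever sits in the staging container, which addresses the concern about the whole container being exposed.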
Allowing the Snowflake VNet subnet IDs
You can add the Snowflake VNet subnet IDs to the storage account's network rules, restricting access to only those subnets; more on this here:
https://docs.snowflake.com/en/user-guide/data-load-azure-allow.html
We have a web API developed in .NET Core 3.1 which talks to an Azure SQL database and runs as an Azure web app. The database is a single database of a multi-tenant app and is protected by row-level security. It requires setting the session context before executing any SQL statement. The session context is the primary key of the tenant table, which is an integer.
I've learned that I can use EF Core interceptors to set the session context. However, for security reasons we cannot send/receive the tenant id in the URL as a parameter, so we use another identifier, which looks like an encrypted string.
Given that we have a tenant identifier, what is the most efficient way to set the session context to the tenant id? The API is stateless, so I can't use session state, and the controller doesn't require authentication, so I don't have a logged-in user either. The last option, and probably the ugliest, would be to hardcode and maintain a list on the server side so that I don't have to do a database trip every time.
Considering the comments from @JeroenMostert, I've decided to do a database trip, as it won't be expensive because the data will not be going back to the client. After gaining some more knowledge and understanding I'll certainly consider using a memory-optimized table.
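For anyone weighing the same trade-off: the database trip can also be combined with a small in-process cache so the lookup happens once per identifier. A minimal sketch in Python (the `TenantResolver` class and its injected `lookup` callable are hypothetical, standing in for a parameterized query against the tenant table):

```python
from typing import Callable, Dict

class TenantResolver:
    """Resolve an opaque tenant identifier to its integer tenant id,
    caching results so the database round trip happens at most once
    per identifier."""

    def __init__(self, lookup: Callable[[str], int]):
        self._lookup = lookup          # e.g. a parameterized SELECT
        self._cache: Dict[str, int] = {}

    def resolve(self, identifier: str) -> int:
        if identifier not in self._cache:
            self._cache[identifier] = self._lookup(identifier)
        return self._cache[identifier]
```

The cached id is then what an interceptor would pass to `sp_set_session_context` before each statement. Note that a plain dictionary like this never expires entries; a real implementation would want a bounded or time-limited cache.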
So I have an Azure SQL Database instance that I need to run a nightly data import on, and I was going to schedule a stored procedure to make a basic GET request against an API endpoint, but it seems like the OLE object isn't present in the Azure version of SQL Server. Is there any other way to make an API call available in Azure SQL Database, or do I need to put in place something outside of the database to accomplish this?
There are several options. I do not know whether a PowerShell job, as suggested in the first comment on your question, can execute HTTP requests, but I do know of at least a couple of alternatives:
Azure Data Factory allows you to create scheduled pipelines to copy/transform data from a variety of sources (like HTTP endpoints) to a variety of destinations (like Azure SQL databases). This involves little or no scripting.
Azure Logic Apps allows you to do the same:
With Azure Logic Apps, you can integrate (cloud) data into (on-premises) data storage. For instance, a logic app can store HTTP request data in a SQL Server database.
Logic apps can be triggered on a schedule as well and involve little or no scripting.
You could also write an Azure Function that executes on a schedule, calls the HTTP endpoint, and writes the result to the database. Multiple languages are supported for writing functions, such as C# and PowerShell.
All of these options also allow you to force an execution outside the schedule.
In my opinion, Azure Data Factory (no coding) or an Azure Function (code only) is the best option, given the need to parse a lot of JSON data. But do mind that Azure Functions on a Consumption plan have a maximum execution time of 10 minutes per invocation.
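Whichever host you pick, the core of the job is a fetch-parse-insert step. A minimal Python sketch with the HTTP call and the database write injected as callables, so the same logic runs under a timer trigger or in a local test; the function name `import_rows` and the payload shape `[{"id": ..., "value": ...}]` are assumptions for illustration:

```python
import json
from typing import Callable, Iterable, Tuple

def import_rows(fetch: Callable[[], str],
                insert: Callable[[Iterable[Tuple]], None]) -> int:
    """Fetch a JSON payload from the API endpoint, parse it, and hand
    the rows to the database layer. Returns the number of rows staged."""
    payload = json.loads(fetch())
    rows = [(item["id"], item["value"]) for item in payload]
    insert(rows)
    return len(rows)
```

In an Azure Function, `fetch` would wrap the GET request and `insert` a batched database write; keeping both injected makes the 10-minute limit easier to reason about, since you can measure each side separately.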
I need to connect Angular with Redshift for historical reporting. Can this be achieved, and what are the prerequisites?
This is possible in theory using the Redshift Data API, but you should consider whether you truly want your client machine to be writing and executing SQL commands directly to Redshift.
To allow this, the following would be true:
The client machine will send the SQL to be executed; a malicious actor could modify this, so permissions would be important.
You would need to generate IAM credentials via a service like Cognito to directly interact with the API.
It would be more appropriate to create an API that communicates with Redshift on the client's behalf, offering protection over the SQL that can be executed.
This could use API Gateway and Lambda to keep it simple, with your frontend calling this instead of writing the SQL directly.
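One simple way for that API layer to protect the SQL is an allow-list of named reports, so the frontend never sends SQL at all. A Python sketch (the report name and query text here are hypothetical; in a Lambda the returned SQL would then be passed to the Redshift Data API's ExecuteStatement operation):

```python
# Allow-list of named reports: the frontend sends a report name,
# never raw SQL. The entries below are illustrative only.
REPORTS = {
    "daily_sales": ("SELECT sale_date, SUM(amount) FROM sales "
                    "WHERE sale_date = :sale_date GROUP BY sale_date"),
}

def sql_for_report(name: str) -> str:
    """Return the fixed SQL for a known report, refusing anything else."""
    if name not in REPORTS:
        raise ValueError(f"unknown report: {name}")
    return REPORTS[name]
```

Anything not in the mapping is rejected outright, which turns "what SQL can the client run" into a short, reviewable list.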
More information is available in the Announcing Data API for Amazon Redshift post.
What is the best option for a Windows application that uses SQL Server authentication? Should I create a single SQL account and manage the users inside the application (using a users table), or should I create a SQL Server account for each user? What is your experience? Thank you!
It depends on whether the username/password for the SQL server would be exposed to the user, and whether that would be a problem. Generally, for internal apps in smaller organisations, one would trust the users not to log in directly to the SQL server. If you have a middleware layer (i.e. web services), the password can be hidden from the user.
I prefer to use a general login for the DB and manage users in the application. Even if you create a SQL login for each application user, they could still connect directly, so why not just use a generic SQL login that is easier to manage? This of course assumes all users have the same access.
One good practice, if users could potentially get direct access to the DB, is to grant access only through stored procedures and not directly to tables, so that only certain actions can be performed. Steer away from writing business logic or security checks (except basic ones) within the stored procs.
One way I would solve your problem is to write web services that check security and do your CRUD (via datasets, etc.), but again it depends on the app and environment.
In summary: if you have a middle layer, or all users have the same access, manage the users within the application and use a single login. Otherwise, use a login per user or per role.
One option that I have used in the past is the ASP.NET Membership Provider. It makes authentication a breeze. The only drawback I saw was that it adds a bunch of tables to your database.
The code for using it is very straightforward.
Here's a blog post about using this in a Windows app. http://msmvps.com/blogs/theproblemsolver/archive/2006/01/12/80905.aspx Here's another article with more details. http://www.c-sharpcorner.com/UploadFile/jmcfet/Provider-basedASP.NET10162006104542AM/Provider-basedASP.NET.aspx
Here's another article that talks about using it with Windows applications: http://www.theproblemsolver.nl/usingthemembershipproviderinwinforms.htm
Google for "ASP.NET 2.0 Membership Provider", and you will get plenty of hits.
What about having SQL accounts based on the level of permissions needed for the task? For example, you could have a read-only account used just for reporting, if your system does a lot of reporting. You would also need an account that has write access, for people to change their passwords and for other user admin tasks.
If you have situations where certain users will only have access to certain data, I would have separate accounts for that data. The problem with using one account is that you are assuming there is no SQL injection anywhere in your application. That is something everyone strives for, but sometimes perfect security is not possible, hence the multi-pronged approach.
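To make the injection point concrete: parameterized queries keep user input out of the SQL text regardless of how many accounts you use, so they are the first line of defence before account separation. A small Python sketch using an in-memory SQLite database as a stand-in for SQL Server (the `find_user` helper and table are hypothetical):

```python
import sqlite3

# In-memory database standing in for SQL Server; the point is the
# parameter placeholder, which sends user input as data, not as SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # Parameterized: `name` can never change the query's structure
    return conn.execute("SELECT name, role FROM users WHERE name = ?",
                        (name,)).fetchall()
```

With this in place, a classic payload like `' OR '1'='1` simply matches no rows instead of returning every row; the least-privilege accounts described above then limit the damage if a non-parameterized query slips through somewhere.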