Snowflake ships with a number of databases. Which of the following are OK to delete, and which are critical to retain for operations?
[Image: the Databases view showing DEMO_DB, UTIL_DB, SNOWFLAKE, and SNOWFLAKE_SAMPLE_DATA]
Thanks, Jason
You can delete all of them except SNOWFLAKE. Snowflake (the company/service) is actually the owner of the SNOWFLAKE database, so you couldn't delete it even if you tried, but you can delete all the other databases with the right ROLE without any issues, as they just contain sample data.
The SNOWFLAKE database is extremely useful as it keeps a history of all Queries and other activity within your account.
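As a minimal sketch (assuming a role with sufficient privileges, such as ACCOUNTADMIN, and that the sample databases shown in the screenshot are not used by anything in your account), the cleanup and the history lookup might look like this:

    -- Drop the sample databases; confirm they are unused first.
    USE ROLE ACCOUNTADMIN;
    DROP DATABASE IF EXISTS DEMO_DB;
    DROP DATABASE IF EXISTS UTIL_DB;
    DROP DATABASE IF EXISTS SNOWFLAKE_SAMPLE_DATA;

    -- The SNOWFLAKE database cannot be dropped; it exposes account history, e.g.:
    SELECT query_text, user_name, start_time
    FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
    ORDER BY start_time DESC
    LIMIT 10;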
I was working in Azure Data Studio. By mistake I created a table in the system administrator's database. I want to transfer it to another database, which I created myself. How can I transfer that table?
With Azure Data Studio, we can't transfer the table to another database directly. Azure SQL Database also doesn't support the USE statement, and Azure Data Studio doesn't support import or export jobs. The only ways are:
Create the table with its data again in your user DB, and then delete it from the system administrator's database.
Use Elastic Query (CREATE EXTERNAL DATA SOURCE, CREATE EXTERNAL TABLE) to query the table data in the system administrator's database across databases, and then import it into your user DB (a rough sketch is given after this answer).
I tested this in Azure Data Studio and it works well. Since you can create tables in the system administrator's database, I think you have enough permissions for these operations.
If you can use SSMS, this could be much easier, and there are many ways to achieve it. For example:
See the blog (which Json Pan provided in a comment): https://blog.atwork.at/post/How-to-copy-table-data-between-Azure-SQL-Databases
Export the table into a CSV file and then import it into your user DB.
You can also use Elastic Query.
Just choose the way you like.
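To make the Elastic Query option more concrete, here is a rough sketch run from your user database; all names (the credential, data source, server, and dbo.MyTable) are hypothetical placeholders and only illustrate the general shape of the approach:

    -- Run in your user database; names and secrets below are placeholders.
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

    CREATE DATABASE SCOPED CREDENTIAL AdminDbCredential
    WITH IDENTITY = '<sql login>', SECRET = '<password>';

    CREATE EXTERNAL DATA SOURCE AdminDbSource
    WITH (
        TYPE = RDBMS,
        LOCATION = '<server>.database.windows.net',
        DATABASE_NAME = '<system administrator database>',
        CREDENTIAL = AdminDbCredential
    );

    -- External table matching the table that was created by mistake
    CREATE EXTERNAL TABLE dbo.MyTable_External (
        Id INT,
        Name NVARCHAR(100)
    )
    WITH (DATA_SOURCE = AdminDbSource, SCHEMA_NAME = 'dbo', OBJECT_NAME = 'MyTable');

    -- Copy the data into a local table, then drop the original in the other database
    SELECT * INTO dbo.MyTable FROM dbo.MyTable_External;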
I am unable to create objects (views, file formats, stages, etc.) in a shared sample database (SNOWFLAKE_SAMPLE_DATA).
Kindly let me know what the possible way is to access the data.
Regards,
DB
The SNOWFLAKE_SAMPLE_DATA database contains a schema for each data set, with the sample data stored in the tables in each schema. You can execute queries on the tables in this database just as you would on any other database in your account.
The database and schemas do not utilize any data storage, so they do not incur storage charges for your account. However, just as with other databases, executing queries requires a running, current warehouse for your session, which consumes credits.
You can refer to snowflake documentation: DOCS » USING SNOWFLAKE » SAMPLE DATASETS.
Hope this helps answer your question.
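As a quick illustration, any of the sample schemas can be queried directly once a warehouse is running; the warehouse name below is hypothetical, while TPCH_SF1.CUSTOMER is one of the bundled sample tables:

    USE WAREHOUSE my_wh;  -- your own warehouse
    SELECT c_name, c_mktsegment
    FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER
    LIMIT 10;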
Shared databases are read-only. Users in a consumer account can view/query data, but cannot insert or update data, or create any objects in the database. This is why you cannot create any objects in the shared database (SNOWFLAKE_SAMPLE_DATA).
https://docs.snowflake.com/en/user-guide/data-share-consumers.html#general-limitations-for-shared-databases
You can query the data in a shared database like any other database.
https://docs.snowflake.com/en/user-guide/data-share-consumers.html#querying-a-shared-database
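Since the share is read-only, the usual workaround is to create your views, file formats, or stages in a database you own and reference the shared tables from there. A small sketch, where MY_DB is a hypothetical database name:

    CREATE DATABASE IF NOT EXISTS MY_DB;
    CREATE VIEW MY_DB.PUBLIC.TOP_CUSTOMERS AS
        SELECT c_custkey, c_name, c_acctbal
        FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER
        WHERE c_acctbal > 9000;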
I have two databases on the same Azure SQL server. I want both databases to interact with each other using a trigger, i.e. if any record is inserted into the Customer table of the first database, the trigger is fired and the record is inserted into the other database.
We had/have the same problem with the insert-update-delete triggers we use: we write a record to Database-1, which holds the primary table, but also update Database-2, where we hold "archive" versions of the tables.
The only solution we have identified and are testing is to bring all of the tables into a single database and separate the different tables under separate database schemas in the one database.
Analysis so far of this approach looks promising.
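As a rough T-SQL sketch of that single-database approach (the schema, table, and column names below are hypothetical), the archive copy can live in its own schema and be populated by a trigger:

    -- Archive schema and table live in the same database as dbo.Customer
    CREATE SCHEMA archive;
    GO
    CREATE TABLE archive.Customer (
        CustomerId INT           NOT NULL,
        Name       NVARCHAR(100) NULL,
        ArchivedAt DATETIME      NOT NULL DEFAULT GETDATE()
    );
    GO
    CREATE TRIGGER dbo.trg_Customer_Archive
    ON dbo.Customer
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Capture the affected rows: old images from 'deleted', new images from 'inserted'
        INSERT INTO archive.Customer (CustomerId, Name)
        SELECT CustomerId, Name FROM deleted
        UNION ALL
        SELECT CustomerId, Name FROM inserted;
    END;
    GO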
I think what you're trying to do is not allowed in SQL Azure. In my experience, what you are trying to do is bad practice on-premises as well (think backup-restore and availability scenarios).
You should move the dependency in the application and have the application update both databases, as appropriate.
Anyway, if you want to continue with this approach, please take a look at the Elastic Query feature: https://learn.microsoft.com/en-in/azure/sql-database/sql-database-elastic-query-overview
Please let me know if I can help with anything.
So I'm in the position of using a Caché database. Not my decision; I'm coming into the project with the view that it's a database, so all the naysayers please be respectful. There are over 24 million rows per year added to this database, so I'm looking for a way to keep history on insert/update/delete. In SQL Server we would create a database model, then run a tool to generate history tables in another database, plus triggers to handle insert/update/delete, e.g. [MyDatabase].[dbo].[Address], [MyDatabaseHistory].[dbo].[AddressHistory]... you get the idea. Anyone out there with experience doing a similar thing with a Caché database?
In Caché you can also use triggers; please see the documentation.
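The general shape would be a row-level AFTER trigger on each primary table that writes to its history table, something like the sketch below. This is only an assumed outline with hypothetical table names; the exact CREATE TRIGGER syntax, and in particular how the changed row's column values are referenced inside the trigger body, should be checked against the Caché SQL trigger documentation.

    -- Assumed outline only; verify syntax in the Caché SQL reference.
    CREATE TRIGGER Address_HistIns
        AFTER INSERT ON Address
        FOR EACH ROW
        INSERT INTO AddressHistory (ChangeType, ChangedAt)
        VALUES ('I', CURRENT_TIMESTAMP)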
Supposing we have a web application which uses a SQL Server 2005 database, would it be better for performance to move all our custom log tables to a specific catalog?
Scenario
Our web application today uses different catalogs from SQL Server. Each catalog has tables related to a problem (domain/subject): db_financial, db_corporative, etc.
These catalogs already have many different log tables that record a history of changes made by users during application usage: tb_log_product, tb_log_customer, tb_log_provider_prices, etc.
The goal
The goal is to know if there is any advantage on moving log tables to a specific catalog.
These log tables can have lots of data, so I was wondering whether it is a good idea to move all of them to a different catalog such as db_log (or whether I should keep the log tables in the catalogs they are in now).
Logs are mostly used for auditing purposes and to keep history of what-happened and who-dun-it. If you have a database called db_operations and table such as tb_customer, I recommend that your log-table tb_log_customer be in the same database (db_operations).
Keeping them in the same database will allow you to take backups of both customer and customer-log table as a single unit of work. If your log was in a different database such as db_logs, you would have to back up db_operations and db_logs at the same time and still not get a pristine restore. Same issue applies to log shipping and mirroring techniques.
To manage the log tables, I'd recommend creating filegroup(s). Log tables can go on these filegroup(s), and the path for the filegroup can be a different volume/controller. To manage the size of the log tables, I propose deleting history after a certain retention period. I'd recommend taking a look at partitioning as well.
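A minimal T-SQL sketch of that setup, keeping the log table in the same database on its own filegroup (the database, filegroup, file path, and column names here are hypothetical):

    -- Add a dedicated filegroup and a data file on a separate volume
    ALTER DATABASE db_operations ADD FILEGROUP fg_logs;

    ALTER DATABASE db_operations
    ADD FILE (
        NAME = 'db_operations_logs',
        FILENAME = 'E:\SQLData\db_operations_logs.ndf',
        SIZE = 512MB,
        FILEGROWTH = 256MB
    ) TO FILEGROUP fg_logs;

    -- Place the log table on the log filegroup
    CREATE TABLE dbo.tb_log_customer (
        log_id      INT IDENTITY(1,1) PRIMARY KEY,
        customer_id INT      NOT NULL,
        changed_by  SYSNAME  NOT NULL,
        changed_at  DATETIME NOT NULL DEFAULT GETDATE(),
        change_type CHAR(1)  NOT NULL  -- I / U / D
    ) ON fg_logs;

    -- Periodic purge of old history (retention period is an example value)
    DELETE FROM dbo.tb_log_customer
    WHERE changed_at < DATEADD(YEAR, -2, GETDATE());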