Is there any way to force Snowflake to immediately purge a dropped permanent table?
When a permanent table is dropped, the fail-safe feature of Snowflake will ensure it can be "un-dropped" for 7 days. This incurs storage costs for those 7 days until the data is purged.
I am aware that temporary and transient tables do not have fail-safe; however, the table(s) I need purged immediately on drop are permanent tables.
https://docs.snowflake.com/en/user-guide/tables-temp-transient.html#comparison-of-table-types
If this is not possible, would deleting all records in the table prior to dropping it help at all with storage costs?
The only way to avoid fail-safe is to leverage temporary or transient tables. Deleting all of the records first doesn't help you.
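For example, if recreating the tables as transient is an option, there is no fail-safe period to pay for at all. A minimal sketch, assuming a hypothetical table named my_staging:

    -- Transient tables have no fail-safe period; a retention time of 0
    -- also removes the Time Travel copy when the table is dropped.
    CREATE TRANSIENT TABLE my_staging (
        id   NUMBER,
        note VARCHAR
    )
    DATA_RETENTION_TIME_IN_DAYS = 0;

    -- Dropping it now incurs no fail-safe storage cost.
    DROP TABLE my_staging;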
Related
Usually auditing is done through triggers; however, I have created temporal tables for my database so that I can do data forensics as well as auditing. I also have an auditing requirement: if I were to restore my database to an earlier state, the audit data (history table) needs to be kept intact and only the original table needs to be restored. The history tables remain the same regardless of the restore point.
One way that I can achieve this is by turning system versioning off, restoring the original tables, and then turning system versioning back on (sketched below). This seems like a really hectic task to do for every table, since I have about 500-1000 tables that I need to audit.
One more query: if temporal tables aren't good for auditing, should I go for traditional triggers for data auditing instead?
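For reference, the off/restore/on pattern mentioned above looks roughly like this for a single table (the table and history-table names here are placeholders):

    -- Detach the history table so the restore does not touch it.
    ALTER TABLE dbo.Customer SET (SYSTEM_VERSIONING = OFF);

    -- ... restore / replace the data in the current table here ...

    -- Re-attach the same history table and resume versioning.
    ALTER TABLE dbo.Customer SET (SYSTEM_VERSIONING = ON
        (HISTORY_TABLE = dbo.CustomerHistory, DATA_CONSISTENCY_CHECK = ON));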
I have a stored procedure where I need to hold some data and process it. I am using a temp table (#temp) for this purpose. Now, as the temp table is stored in tempdb, this will cause inter-database communication, so would it be better to create a normal table? Are there any other benefits of a #temp table?
While communication with a temp table is technically an 'interdatabase' communication, tempdb is usually highly cached (or can be configured as such), so it's more likely to be a read from memory than a read from disk.
By default, temp tables are also only accessible by the process that created them, which protects the data from being read by unauthorized users. Cleanup of the tables is managed automatically as well. They can also help reduce contention for resources in the working database.
From a practical standpoint, a table in a regular database can be finely tuned to offer better performance than a #temp table, but for most purposes, a #temp table is going to be the more practical solution.
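A minimal sketch of that session-scoped behaviour (the table and column names are made up):

    -- Created in tempdb, visible only to this session, and dropped
    -- automatically when the session (or creating procedure) ends.
    CREATE TABLE #temp_orders (
        order_id INT,
        amount   DECIMAL(10, 2)
    )

    INSERT INTO #temp_orders (order_id, amount) VALUES (1, 19.99)

    -- Another session cannot see #temp_orders; if it creates a table
    -- with the same name it gets its own private copy.
    SELECT order_id, amount FROM #temp_orders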
Tempdb overview:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc31654.1550/html/sag1/X73131.htm
Tempdb Performance and Tuning:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc00841.1502/html/phys_tune/X94507.htm
Please let me know an efficient way of purging the data from a transaction database without affecting application performance.
You can use the ALTER TABLE command to empty a table with "NOT LOGGED INITIALLY".
http://pic.dhe.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0000888.html
You can also use the TRUNCATE command.
http://pic.dhe.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0053474.html
If you need to do this operation for several tables, you can generate the statements to delete the rows by querying the catalog.
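A rough sketch of those three options in DB2 syntax, with placeholder schema and table names:

    -- Option 1: empty the table without logging the row deletions
    -- (run with autocommit off; the attribute lasts until COMMIT).
    ALTER TABLE myschema.sales ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE;

    -- Option 2: truncate the table (DB2 LUW 9.7 and later).
    TRUNCATE TABLE myschema.sales IMMEDIATE;

    -- Option 3: generate the statements for many tables from the catalog,
    -- then run the generated script.
    SELECT 'TRUNCATE TABLE ' || RTRIM(TABSCHEMA) || '.' || RTRIM(TABNAME) || ' IMMEDIATE;'
    FROM   SYSCAT.TABLES
    WHERE  TABSCHEMA = 'MYSCHEMA'
    AND    TYPE = 'T';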
I have two different databases.
One of them is the original database and the other one is a cache database.
These databases are in different locations.
Once a day, I must update the cache database from the original database.
And I must do this update process with a web service that runs on the original database's machine.
I could do it by clearing all the cache DB tables and inserting the original data on every run.
But I think that is a bad scenario.
So how can I do this update process efficiently?
Do you have any suggestions?
I'm pretty sure that there are DB syncing technologies out there, but since you already have the requirement, I'd recommend using a change log.
So, you'll have a "CHANGE_LOG" table, into which you insert rows whenever you do "writes" on your tables (INSERT, UPDATE, DELETE). Once a day, you can apply these changes one by one to the cache DB.
Deleting the change log once it's applied is okay, but you can also assign a "version" to the DBs, so that each change to the DB increments the version number. That can be used to manage more than one cache DB.
For example, to provide additional assurance, you can have a trigger in each cache DB that increments its own version number. That way, your process can query a cache DB and will know which changes still need to be applied, without maintaining that state in the master DB (which also makes it easy to hook up a new cache DB or bring a crashed cache DB up to date).
Note that you probably need to purge the change log from time to time.
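A minimal sketch of that change-log idea, in SQL Server style syntax with made-up table and column names:

    -- One row per write, recording which table/row changed and how.
    CREATE TABLE CHANGE_LOG (
        change_id  BIGINT IDENTITY(1, 1) PRIMARY KEY,
        table_name SYSNAME       NOT NULL,
        row_key    NVARCHAR(100) NOT NULL,
        operation  CHAR(1)       NOT NULL,  -- 'I', 'U' or 'D'
        changed_at DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO

    -- Example trigger on one source table that feeds the change log.
    CREATE TRIGGER trg_Customer_ChangeLog
    ON dbo.Customer
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Rows present in "inserted" are inserts or updates.
        INSERT INTO CHANGE_LOG (table_name, row_key, operation)
        SELECT 'Customer', CAST(i.CustomerId AS NVARCHAR(100)),
               CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
        FROM inserted AS i
        UNION ALL
        -- Rows only in "deleted" are deletes.
        SELECT 'Customer', CAST(d.CustomerId AS NVARCHAR(100)), 'D'
        FROM deleted AS d
        WHERE NOT EXISTS (SELECT 1 FROM inserted);
    END;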
The way I see it, you're going to have to grab all the data from the source database, as you don't seem to have any way of interrogating it to see what data has changed. A simple way to do it would be to copy all the data from the source database into temporary or staging tables in the cache database. Then you can do a diff between both sets of tables and update the records that have changed. Or, once you have all the data in the staging tables, drop/rename the existing tables and rename the staging tables to the existing table names.
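A rough sketch of that staging-and-swap variant (SQL Server style; the linked-server and table names are placeholders):

    -- 1. Load a full copy of the source data into a staging table.
    TRUNCATE TABLE dbo.Customer_Staging;
    INSERT INTO dbo.Customer_Staging (CustomerId, Name, Email)
    SELECT CustomerId, Name, Email
    FROM SourceServer.SourceDb.dbo.Customer;  -- e.g. via a linked server

    -- 2. Swap the freshly loaded table in as the live table.
    BEGIN TRANSACTION;
        EXEC sp_rename 'dbo.Customer', 'Customer_Old';
        EXEC sp_rename 'dbo.Customer_Staging', 'Customer';
        EXEC sp_rename 'dbo.Customer_Old', 'Customer_Staging';
    COMMIT TRANSACTION;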
This is really a two-pronged question.
One, I'm experiencing a phenomenon where SQL Server consumes a lot of tempdb log file space when using a global temp table, while using a local temp table consumes data file space.
Is this normal? I can't find anything on the web that talks about consuming log file space in this way when using global temp tables vs. local temp tables.
Two, if this is expected behavior, is there any way to tell it not to do this :). I have plenty of data space (6 GB), but my log space is restricted (750 MB with limited growth). As usual, tempdb is set up with simple recovery, so running into the log file space limit has never been a problem before ... but I've never used global temp tables the way I'm using them now, either.
Thanks!! Joel
When either form of temporary table is created (local or global), the table is physically created and stored in the tempdb database. Any transactional activity on these tables is therefore logged in the tempdb transaction log file.
There is no setting for this per se; however, you could implement a physical table, as opposed to a temporary table, in order to store your data within a user database, thereby using that database's own data and transaction log files.
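A minimal sketch of that alternative, with made-up names: a permanent staging table in the user database, keyed per caller, so the write activity hits that database's log rather than tempdb's.

    -- Permanent staging table in the user database; its activity is
    -- logged in this database's log file instead of tempdb's.
    CREATE TABLE dbo.ReportStaging (
        session_key UNIQUEIDENTIFIER NOT NULL,
        payload     NVARCHAR(MAX)    NULL
    );

    -- Each caller tags its rows and cleans them up when finished.
    DECLARE @key UNIQUEIDENTIFIER = NEWID();
    INSERT INTO dbo.ReportStaging (session_key, payload) VALUES (@key, N'...');
    -- ... work with the rows for @key ...
    DELETE FROM dbo.ReportStaging WHERE session_key = @key;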
If you really want to get stuck in and learn about the tempdb database, take a look at the following resources.
Everything you ever wanted to know about the tempdb database
What is the lifespan of one of these global temp tables? Are they dropped in a reasonable time? "Regular" temp tables get dropped when the user disconnects, if not manually before then, and "global" (##) temp tables get dropped, if memory serves, when the creating session ends. I can see the log growing if the global temp tables last for a long time, because it could be that the log records governing the temp table activity are still marked as active log records and don't get freed by log backups (full recovery) or checkpoints (simple).
The length of the session will have an impact as noted above.
Also, temporary tables work within transactions, while table variables work outside the context of transactions. Because of this, the temporary table will write entries to the log file that relate to the updates made to the table.
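A quick illustration of that difference, as a T-SQL sketch:

    CREATE TABLE #t (val INT);
    DECLARE @t TABLE (val INT);

    BEGIN TRANSACTION;
        INSERT INTO #t (val) VALUES (1);  -- participates in the transaction
        INSERT INTO @t (val) VALUES (1);  -- not affected by the rollback
    ROLLBACK TRANSACTION;

    SELECT COUNT(*) AS temp_rows FROM #t;  -- 0: the insert was rolled back
    SELECT COUNT(*) AS tvar_rows FROM @t;  -- 1: table variables ignore user transactions

    DROP TABLE #t;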