We are building a system in our company which will need temporal tables in sql server and might need log shipping as well. I wanted to know if there are any unexpected impacts of log shipping on a temporal table that wouldn't happen on a normal table?
I would expect no impact in either direction (that is, log shipping won't change the temporal table, nor will the temporal table change log shipping). At its core, log shipping is just restoring transaction logs on another server. And temporal tables are (more or less) a built-in trigger that maintains another table on data mutations. That extra work will be present in the log backup and will restore just fine at the log shipping secondary.
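For concreteness, here is a minimal sketch of a system-versioned temporal table (the table, column, and history-table names are all hypothetical). The rows SQL Server writes to the history table are ordinary logged operations, so they ship and restore on the secondary like any other change.

    CREATE TABLE dbo.Employee
    (
        EmployeeId INT           NOT NULL PRIMARY KEY,
        Name       NVARCHAR(100) NOT NULL,
        ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));
    -- Every UPDATE/DELETE on dbo.Employee also writes the old row version to
    -- dbo.EmployeeHistory; both writes go into the transaction log together.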
Previously in the company we used temporary tables, but as the volume of business data grew we had to rewrite those queries to use common table expressions:
WITH table_alias AS (SELECT ...)
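For example, a query that previously loaded intermediate results into a temporary table can often be rewritten with a CTE instead (all names here are made up):

    -- Hypothetical rewrite: CTE in place of a #temp table
    WITH RecentOrders AS
    (
        SELECT CustomerId, OrderDate, Amount
        FROM dbo.Orders
        WHERE OrderDate >= DATEADD(DAY, -30, GETDATE())
    )
    SELECT CustomerId, SUM(Amount) AS Total30Days
    FROM RecentOrders
    GROUP BY CustomerId;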
I need some guidance here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server), in one single table, 'tbSysMasterLog'. Yes, the log of the application is stored in another database.
The problem is that before any insert/update/delete command on the application database, a transaction is started, and therefore the table in the log database is locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application will be blocked.
So... is there any possible way to disable transactions on a particular database or on a particular table?
You cannot turn off the log; everything gets logged. You can set the recovery model to "Simple", which limits how much log data is kept once the records are committed.
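Assuming "Simple" here refers to the database recovery model, the change would look something like this (the database name is a placeholder):

    -- The log is still written, but its space is reused after checkpoints
    -- instead of being kept until the next log backup.
    ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;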
" the table of the log database is locked": why that?
Normally you log changes by inserting records. Inserting records should not lock the complete table; normally there should not be any contention on insertion.
If you do more than inserts, perhaps you should consider changing that. Also look at the indexes defined on the log table; perhaps you can avoid some of them.
It sounds from the question like you have an explicit transaction started at the beginning of your triggers, and that you are logging to the other database before the commit.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert it into your log table.
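A rough sketch of that pattern, assuming hypothetical table and column names (only tbSysMasterLog comes from the question): capture what you need, finish the business transaction, then write the log row outside it.

    DECLARE @CustomerId INT,
            @OldEmail   NVARCHAR(200),
            @NewEmail   NVARCHAR(200) = N'new@example.com';

    BEGIN TRANSACTION;
        SELECT @CustomerId = CustomerId, @OldEmail = Email
        FROM dbo.Customer
        WHERE CustomerId = 42;

        UPDATE dbo.Customer SET Email = @NewEmail WHERE CustomerId = 42;
    COMMIT TRANSACTION;

    -- The log insert now runs outside the business transaction, so it does not
    -- hold locks on the log table while other work is in flight.
    INSERT INTO LogDb.dbo.tbSysMasterLog (TableName, KeyValue, OldValue, NewValue, LoggedAt)
    VALUES (N'Customer', @CustomerId, @OldEmail, @NewEmail, SYSDATETIME());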
Normally inserts are fast and can happen in parallel without locking. Certain things, like identity columns, require order, but they are a very lightweight structure; they can be avoided by generating GUIDs so inserts are non-blocking, though for something like your log table a primary key identity column gives you a clear sequence that is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the log entries may not be in the same order in which the transactions occurred, due to the different times that transactions take to commit.
We normally log into individual tables with a similar name to the master table, e.g. FooHistory or AuditFoo.
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning, and it will give you a copy of every statement run on the database (including those fired by triggers), which you can log to a different database server. It is a good idea to log to a different server if you are tracing a heavily used server, since the volume of data is massive if you are tracing, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script up starting, stopping, and loading traces.
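For instance, a finished trace file can later be pulled into a table with fn_trace_gettable (the file path and the target table name below are made up):

    -- Load a trace file into a table for analysis
    SELECT *
    INTO dbo.LoadedTrace
    FROM sys.fn_trace_gettable(N'D:\Traces\app_trace.trc', DEFAULT);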
The load on the server receiving the trace log is minimal, and I have never had a locking problem on the server receiving the trace, so I am pretty sure that something in your setup is causing the locks.
Usually auditing is done through triggers; however, I have created temporal tables for my database so that I can do data forensics as well as auditing. I also have an auditing requirement: if I were to restore my database to an earlier state, the audit data (the history tables) needs to be kept intact and only the original tables need to be restored. The history tables must remain the same regardless of the restoration point.
One way that I can achieve this is by turning system versioning off, restoring the original tables, and then turning system versioning back on. This seems a really hectic task to do for every table, since I have about 500-1000 tables that I need to audit.
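For reference, the per-table cycle looks roughly like this (table names are placeholders), which is exactly what becomes tedious across hundreds of tables:

    -- 1. Detach the history table from the current table
    ALTER TABLE dbo.MyTable SET (SYSTEM_VERSIONING = OFF);

    -- 2. Restore or reload dbo.MyTable here, leaving dbo.MyTableHistory alone

    -- 3. Re-attach the existing history table
    ALTER TABLE dbo.MyTable
        SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyTableHistory,
                                     DATA_CONSISTENCY_CHECK = ON));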
One more query: if temporal tables aren't good for auditing, should I go for traditional triggers for data auditing instead?
I have SQL Server 2014, standard edition. We have several tables where we delete data from, then re-insert it under different primary keys (to merge records for two people in our system that are actually the same). All these changes are performed with a T-SQL transaction.
I understand how transactions and rollbacks work, but what I need is more of an audit/rollback since my users may need to rollback just this transaction only at a later date (not restoring the whole database or table). "Change Data Capture" is not an option since I only have standard edition.
My real question lies in how to store this auditing information. I imagine I'll need a unique key to keep track of this being one unit of work, so all these table changes get tied to the same group as far as the user is concerned. But if I have a DELETE WHERE ID = #ID query for example, how do I store all these deleted records before deleting so that I can re-insert them later if needed? I'm fine with even storing a large rollback T-SQL script of some kind, I'm just not sure how to generate INSERT scripts that I can store and run later for data that I'm about to delete.
I'm open to any ideas, I just need an architecture that's generic enough to handle multiple tables and the ability to rollback deletions and insertions. I care more about the rollback ability than keeping a pretty audit table.
You cannot do that out of the box: even with full logging you can restore an entire database to a point in time, but you cannot roll back specific transactions.
You will have to code something to undo transactions, but I believe simple audit triggers will give you the data you need to make that happen. Here is a good article to get you started:
https://www.mssqltips.com/sqlservertip/4055/create-a-simple-sql-server-trigger-to-build-an-audit-trail/
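As a rough illustration of the trigger approach described in that article (every name below is hypothetical), a DELETE trigger can copy the removed rows into an audit table along with a batch id, so all the changes from one merge can later be undone together:

    -- Hypothetical audit table; one BatchId ties together all changes made by
    -- a single merge/unit of work
    CREATE TABLE dbo.PersonAudit
    (
        AuditId   INT IDENTITY(1,1) PRIMARY KEY,
        BatchId   UNIQUEIDENTIFIER NOT NULL,
        Action    CHAR(1)          NOT NULL,  -- 'D' = delete, 'I' = insert
        PersonId  INT              NOT NULL,
        Name      NVARCHAR(100)    NULL,
        AuditedAt DATETIME2        NOT NULL DEFAULT SYSDATETIME()
    );
    GO

    CREATE TRIGGER dbo.trPerson_AuditDelete ON dbo.Person
    AFTER DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- A real implementation would pass the BatchId in (e.g. via CONTEXT_INFO
        -- on SQL Server 2014); NEWID() is used here only to keep the sketch short.
        INSERT INTO dbo.PersonAudit (BatchId, Action, PersonId, Name)
        SELECT NEWID(), 'D', d.PersonId, d.Name
        FROM deleted AS d;
    END;

Re-inserting is then a matter of selecting the audited columns back out for a given BatchId.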
The primary DB has all the raw data, sampled every 10 minutes, but it only stores it for 1 week. I would like to keep all the raw data for 1 year in another DB, on a different server. How is that possible?
I have created a T-SQL query to select the required data from the primary DB. How can I keep pulling the data from the primary DB and inserting it into the secondary DB accordingly? The table has a Datetime column; could that be used to insert only the new data for the latest datetime?
Notes: source data SQL 2012
secondary db SQL 2005
If you are on SQL 2008 or higher, the MERGE command (ms docs) may be very useful in your actual update process. Be sure you understand it.
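A hedged sketch of what that MERGE might look like, moving rows from a staging table into the long-retention table (all object and column names below are invented):

    MERGE dbo.RawDataYear AS target
    USING dbo.RawDataStaging AS source
        ON  target.SensorId    = source.SensorId
        AND target.ReadingTime = source.ReadingTime
    WHEN MATCHED THEN
        UPDATE SET target.Value = source.Value
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (SensorId, ReadingTime, Value)
        VALUES (source.SensorId, source.ReadingTime, source.Value);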
Your table containing the full year of data sounds like it could be OLAP, so I refer to it that way occasionally (if you don't know what OLAP is, look it up sometime, but it does not matter to this answer).
If you are only updating 1 or 2 tables, log shipping, replication, and failover may not work well for you, especially since you are not replicating the table exactly, due to different retention policies if nothing else. So make sure you understand how replication, etc. work before you go down that path. If these tables are over perhaps 50% of the total database, log shipping style methods might still be your best method. They work well and handle downtime issues for you -- you just replicate the source database to the OLAP server and then update from the duplicate database into your OLAP database.
Doing an update like this every second is an unusual requirement. However, if you create a linked server, you may be able to insert your selected rows into a staging table on the remote server and then update from them into your OLAP table(s). If you can reliably update your OLAP table(s) on the remote server in 1 second, you have a potentially useful method. If not, you may fall behind on posting data to your OLAP tables. If you can update once a minute instead, you are much less likely to fall behind on the update cycle (at the cost of being slightly less current at all times).
Consider putting AFTER triggers on the source table(s) that copy the changes into staging table(s) (still on the source database), with an identity column on each staging table and a flag to indicate Insert, Update, or Delete. Then you are well positioned to ship updates for one or a few tables instead of the whole database. You don't need to requery your source database repeatedly to determine what data needs to be transmitted; just select the top 1000 rows from your staging table(s) (ordered by the staging id) and move them to the remote staging table.
If you fall behind, a TOP 1000 loop keeps you from trying to post too much data in any one cross-server call.
Depending on your data, you may be able to optimize storage and reduce log churn by not copying all columns to your staging table: just the staging id and the primary key of the source table, accepting that whatever data is in the source record at the time you post it to the OLAP database reflects the data at the time the record was staged. It won't be 100% accurate on your OLAP table at all times, but it will be accurate eventually. A rough sketch of this staging approach appears below.
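The sketch assumes made-up object names and a linked server called OLAPSERVER; the staging table carries only the key and a change flag, as suggested above.

    -- Staging table on the source database; StagingId gives a stable ship order
    CREATE TABLE dbo.SourceTableStaging
    (
        StagingId  INT IDENTITY(1,1) PRIMARY KEY,
        ChangeType CHAR(1) NOT NULL,   -- 'I', 'U' or 'D'
        SourceId   INT     NOT NULL    -- primary key of the changed source row
    );
    GO

    CREATE TRIGGER dbo.trSourceTable_Stage ON dbo.SourceTable
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.SourceTableStaging (ChangeType, SourceId)
        SELECT CASE WHEN d.SourceId IS NULL THEN 'I'
                    WHEN i.SourceId IS NULL THEN 'D'
                    ELSE 'U' END,
               COALESCE(i.SourceId, d.SourceId)
        FROM inserted AS i
        FULL OUTER JOIN deleted AS d ON i.SourceId = d.SourceId;
    END;
    GO

    -- Ship in small batches; the remote staging table has a plain StagingId
    -- column (no identity). After a successful call, delete the shipped rows
    -- or remember the last StagingId sent.
    INSERT INTO OLAPSERVER.OlapDb.dbo.SourceTableStaging (StagingId, ChangeType, SourceId)
    SELECT TOP (1000) StagingId, ChangeType, SourceId
    FROM dbo.SourceTableStaging
    ORDER BY StagingId;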
I cannot overemphasize that you need to accommodate downtime in your design -- unless you can live with data loss or just wrong data. Even reliable connections are not 100% reliable.
In PostgreSQL 9.1, streaming replication is done by streaming WAL records, which are generated by UPDATEs and DELETEs on the master's data.
How are DDL changes replicated? Those are not part of the WAL.
PostgreSQL's Write-Ahead Log (WAL) does contain DDL. In PostgreSQL, DDL is transactional, just like DML. Everything goes through the WAL.
See http://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis
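A quick illustration of transactional DDL in PostgreSQL (the table name is arbitrary): the CREATE TABLE is written to the WAL inside the transaction, so it replays on a standby on commit and simply disappears on rollback.

    -- DDL participates in the transaction just like DML
    BEGIN;
    CREATE TABLE demo_ddl (id integer PRIMARY KEY, note text);
    INSERT INTO demo_ddl VALUES (1, 'only visible inside this transaction');
    ROLLBACK;  -- the table never becomes visible to other sessions;
               -- COMMIT would make it durable and replicated instead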
To elaborate on Colin's answer, almost everything goes through the write-ahead log. It is a block level journal that records every write that will be made to any database structure. Every change to any part of the data directory is first recorded in the WAL. That's because the primary purpose of the WAL is to allow replay of changes if the system crashes or loses power, so it needs to record every single planned disk write.
In PostgreSQL, tables, views, etc are just entries in the system catalog tables. Changes to these catalogs get write-ahead logged along with everything else. The same is true of database creation; a db is just an entry in pg_database and the corresponding directory structure.
Changes to tables made by VACUUM, CLUSTER, TRUNCATE, etc. all go through WAL as well, either as block-level change records or as special WAL entries that describe the operation.
Only a few non-durable things don't go through WAL, like:
changes to UNLOGGED and TEMPORARY tables
Temp files for on-disk sorts