We have a database A that is replicated to a subscriber database B (used for SSRS reporting) every night at 2:45 AM.
We need to add a column to one of the replicated tables, since its source file on our iSeries is having a column added that we need to use in our SSRS reporting database.
I understand (from Making Schema Changes on Publication Databases and the answer here from Damien_The_Unbeliever) that there is a default setting in SQL Server replication whereby, if we use a T-SQL ALTER TABLE DDL statement to add the new column to our table BUPF in the PION database, the change will automatically propagate to the subscriber db.
How can I check the replication of schema changes setting to ensure that we will have no issues with replication after making the change?
Or should I just run ALTER TABLE BUPF ADD BUPCAT CHAR(5) NULL?
To add a new column to a table and include it in an existing publication, you'll need to use ALTER TABLE <Table> ADD <Column> syntax at the publisher. By default the schema change will be propagated to subscribers, provided the publication property @replicate_ddl is set to true.
You can verify whether @replicate_ddl is set to true by executing sp_helppublication and inspecting the replicate_ddl value in the result. Likewise, you can set @replicate_ddl to true by using sp_changepublication.
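For example, a minimal sketch (run at the publisher in the publication database; 'MyPublication' is a placeholder for your publication name):
-- check the current setting; look at the replicate_ddl column in the result
EXEC sp_helppublication @publication = N'MyPublication';
-- turn DDL replication on if it is not already
EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'replicate_ddl',
    @value = 1;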
See Making Schema Changes on Publication Databases for more information.
We have a moderately-sized SSDT project (~100 tables) that's deployed to dozens of different database instances. As part of our build process we generate a .dacpac file and then when we're ready to upgrade a database we generate a publish script and run it against the database. Some db instances are upgraded at different times so it's important that we have a structured process for these upgrades and versioning.
Most of the generated migration script is dropping and (re)creating procs, functions, indexes and performing any structural changes, plus some data scripts included in a Post-Deployment script. It's these two data-related items I'd like to know how best to structure within the project:
Custom data migrations needed between versions
Static or reference data
Custom data migrations needed between versions
Sometimes we want to perform a one-off data migration as part of an upgrade, and I'm not sure of the best way to incorporate this into our SSDT project. For example, I recently added a new bit column dbo.Charge.HasComments to contain (redundant) derived data based on another table; it will be kept in sync via triggers. It's an annoying but necessary performance improvement (only added after careful consideration and measurement). As part of the upgrade, the SSDT-generated publish script will contain the necessary ALTER TABLE and CREATE TRIGGER statements, but I also want to update this column based on data in another table:
update dbo.Charge
set HasComments = 1
where exists ( select *
from dbo.ChargeComment
where ChargeComment.ChargeId = Charge.ChargeId )
and HasComments = 0
What's the best way to include this data migration script in my SSDT project?
Currently I have each of these types of migrations in a separate file that's included in the Post-Deployment script, so my Post-Deployment script ends up looking like this:
-- data migrations
:r "data migration\Update dbo.Charge.HasComments if never populated.sql"
go
:r "data migration\Update some other new table or column.sql"
go
Is this the right way to do it, or is there some way to tie in better with SSDT and its version tracking, so those scripts aren't even run when the SSDT publish is being run against a database that's already at a more recent version? I could have my own table for tracking which migrations have been run, but would prefer not to roll my own if there's a standard way of doing this.
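(Purely as an illustrative sketch of that roll-your-own idea - the MigrationHistory table and its columns are made up - each migration file could guard itself like this:)
if not exists ( select * from dbo.MigrationHistory
                where MigrationName = 'Update dbo.Charge.HasComments if never populated' )
begin
    update dbo.Charge
    set HasComments = 1
    where exists ( select *
                   from dbo.ChargeComment
                   where ChargeComment.ChargeId = Charge.ChargeId )
      and HasComments = 0;
    insert into dbo.MigrationHistory (MigrationName, AppliedAt)
    values ('Update dbo.Charge.HasComments if never populated', getdate());
end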
Static or reference data
Some of the database tables contain what we call static or reference data, e.g. the list of possible timezones, setting types, currencies, various 'type' tables etc. Currently we populate these by having a separate script for each table that is run as part of the Post-Deployment script. Each static data script inserts all the 'correct' static data into a table variable and then inserts/updates/deletes the static data table as needed. Depending on the table, it might be appropriate only to insert, or to insert and delete but not update existing records. So each script looks something like this:
-- table listing all the correct static data
declare @working_data table (...)
-- add all the static data that should exist into the working table
insert into @working_data (...) select null, null, null where 1=0
union all select 'row1 col1 value', 'col2 value', etc...
union all select 'row2 col1 value', 'col2 value', etc...
...
-- insert any missing records into the live table
insert into staticDataTableX (...)
select * from @working_data
where not exists ( select * from staticDataTableX
                   where [... primary key join on @working_data...] )
-- update any columns that should be updated
update staticDataTableX
set ...
from staticDataTableX
inner join @working_data on [... primary key join on @working_data...]
-- delete any records, if appropriate with this sort of static data
delete from staticDataTableX
where not exists ( select * from @working_data
                   where [... primary key join on staticDataTableX...] )
and then my Post-Deployment script has a section like this:
-- static data. each script adds any missing static/reference data:
:r "static_data\settings.sql"
go
:r "static_data\other_static_data.sql"
go
:r "static_data\more_static_data.sql"
go
Is there a better or more conventional way to structure such static data scripts as part of an SSDT project?
To track whether or not the field has already been initialized, try adding an extended property when the initialization is performed (it can also be used to determine whether the initialization is needed):
To add the extended property:
EXEC sys.sp_addextendedproperty
    @name = N'EP_Charge_HasComments',
    @value = N'Initialized',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Charge',
    @level2type = N'COLUMN', @level2name = N'HasComments';
To check for the extended property:
SELECT objtype, objname, name, value
FROM fn_listextendedproperty (NULL,
'SCHEMA', 'dbo',
'TABLE', 'Charge',
'COLUMN', 'HasComments');
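Putting the two together in the Post-Deployment script might look something like this (just a sketch, reusing the property and column names from the example above):
IF NOT EXISTS ( SELECT * FROM fn_listextendedproperty (N'EP_Charge_HasComments',
                    'SCHEMA', 'dbo', 'TABLE', 'Charge', 'COLUMN', 'HasComments') )
BEGIN
    -- one-off initialization of the new column
    UPDATE dbo.Charge
    SET HasComments = 1
    WHERE EXISTS ( SELECT *
                   FROM dbo.ChargeComment
                   WHERE ChargeComment.ChargeId = Charge.ChargeId )
      AND HasComments = 0;
    -- record that the initialization has been done
    EXEC sys.sp_addextendedproperty
        @name = N'EP_Charge_HasComments',
        @value = N'Initialized',
        @level0type = N'SCHEMA', @level0name = N'dbo',
        @level1type = N'TABLE',  @level1name = N'Charge',
        @level2type = N'COLUMN', @level2name = N'HasComments';
END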
For reference data, try using a MERGE. It's MUCH cleaner than the triple-set of queries you're using.
MERGE INTO staticDataTableX AS Target
USING (
VALUES
('row1_UniqueID', 'row1_col1_value', 'col2_value'),
('row2_UniqueID', 'row2_col1_value', 'col2_value'),
('row3_UniqueID', 'row3_col1_value', 'col2_value'),
('row4_UniqueID', 'row4_col1_value', 'col2_value')
) AS Source (TableXID, col1, col2)
ON Target.TableXID = Source.TableXID
WHEN MATCHED THEN
UPDATE SET
Target.col1 = Source.col1,
Target.col2 = Source.col2
WHEN NOT MATCHED BY TARGET THEN
INSERT (TableXID, col1, col2)
VALUES (Source.TableXID, Source.col1, Source.col2)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;
I just tried inserting a value into a database and that worked. Now I insert again and I get an error about an identical primary key.
I can't find any option to alter it to be auto-increment.
I'm updating the table via Linq-To-Sql.
User u = new User(email.Text, HttpContext.Current.Request.UserHostAddress,
CalculateMD5Hash(password.Text));
db.Users.InsertOnSubmit(u);
db.SubmitChanges();
I didn't fill in the user_id and it worked fine the first time. It became zero.
Trying to add a second user, it wants to make the ID 0 again.
I could query the database and ask for the highest ID, but that's going too far when auto-increment should handle it.
How can I turn this on? All I can find are scripts for table creation. I'd like to keep my existing table and simply edit it.
How is your Linq-to-SQL model defined? Check the properties of the user_id column - what are they set to?
In your Linq-to-SQL model, be sure to have Auto Generated Value set to true, Auto-Sync set to OnInsert, and the Server Data Type set to match your column (INT IDENTITY).
In SQL Server Management Studio, you need to define the user_id column to be of type INT IDENTITY; in the visual table designer, set the column's Identity Specification property.
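If you prefer T-SQL over the designer, note that an existing column can't simply be altered into an IDENTITY column. One sketch of a workaround (assuming the table is dbo.Users and that no constraints or foreign keys reference user_id; if they do, they must be dropped and re-created around these steps):
ALTER TABLE dbo.Users ADD user_id_new INT IDENTITY(1,1) NOT NULL; -- add a new identity column
ALTER TABLE dbo.Users DROP COLUMN user_id;                        -- drop the old, non-identity column
EXEC sp_rename 'dbo.Users.user_id_new', 'user_id', 'COLUMN';      -- rename it back to user_id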
It is zero because you have a plain integer as the primary key column type. To use auto-increment, set the table's identity column to the ID column (selected in the table properties).
It would probably be easier to edit the database using Visual Studio, if you have a version that supports it; otherwise, if you have to edit it in Management Studio, see this article:
http://blogs.msdn.com/b/sqlexpress/archive/2006/11/22/connecting-to-sql-express-user-instances-in-management-studio.aspx
Or you can increment the user_id manually and pass it to the insert function, if you cannot alter the column definition.
I have a database FooDb with a schema BarSchema that contains a table Tbl (i.e. FooDb.BarSchema.Tbl).
I am also logged in as a user whose default schema is BarSchema.
This query works fine
SELECT * FROM FooDb..Tbl
I also have a synonym for this table in another db
CREATE SYNONYM TblSynonym FOR FooDb..Tbl
But now I get an error "Invalid object name 'FooDb..Tbl'" when executing
SELECT * FROM TblSynonym
If i change the synonym to
CREATE SYNONYM TblSynonym FOR FooDb.BarSchema.Tbl
it works fine.
Why doesn't the default schema work in synonyms?
(The background is that I'm consolidating data from several databases which all have the same table names but different schema names. It would be a lot easier if I could set the default schema for each database on the user and then ignore it everywhere in the script.)
The documentation suggests the db..tbl syntax should work:
"schema_name_2: Is the name of the schema of the base object. If schema_name is not specified, the default schema of the current user is used."
This works for me in SQL Server 2008:
create synonym TestSynonym for TestDB..TestTable
One cause might be that the default schema is associated with the user, not the database, so check whether your user has an unexpected default schema. In my SSMS, that setting is located under Database -> Security -> Users -> Properties.
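You can also check it with a query; a quick sketch:
-- list database users and their default schemas
SELECT name, default_schema_name
FROM sys.database_principals
WHERE type IN ('S', 'U');  -- SQL users and Windows users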
I've been asked to create a simple DataGrid-style application to edit a single table of a database, and that's easy enough. But part of the request is to create an audit trail of changes made, who made them, and the date/time.
How might you solve this kind of thing?
(I'll be using C# in VS2008, ADO.NET connected to SQL Server 2005, WPF and Xceed's DataGrid, if it makes any difference.)
There are two common ways of creating audit trails.
Code your data access layer.
In the database itself using triggers.
There are advantages and disadvantages to both. Some people prefer one over the other. It's often down to the type of app and the type of database use you can expect.
If you do it in your DA layer it's pretty much up to you. You just need to add code to every method that saves to the database to also save a log of the changes. This auditing code could be in your DA layer code, or even in your stored procs in your database if you are using stored procs for everything. Essentially the premise is the same, any time you make a change to the database, log that change.
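As a rough sketch of the stored-proc flavour (the table, column and procedure names here are invented purely for illustration), the save proc writes a row to an audit table in the same transaction as the change:
CREATE PROCEDURE dbo.UpdateCustomerName
    @CustomerId INT,
    @NewName    NVARCHAR(100),
    @ChangedBy  NVARCHAR(128)
AS
BEGIN
    BEGIN TRANSACTION;
    -- log the old and new values before applying the change
    INSERT INTO dbo.CustomerAudit (CustomerId, OldName, NewName, ChangedBy, ChangedAt)
    SELECT CustomerId, Name, @NewName, @ChangedBy, GETDATE()
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
    -- apply the actual change
    UPDATE dbo.Customer
    SET Name = @NewName
    WHERE CustomerId = @CustomerId;
    COMMIT TRANSACTION;
END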
If you want to go down the triggers route, you can write custom triggers for each table, or fashion a more generic trigger that works the same on lots of tables. Check out this article on audit triggers. This works by firing triggers whenever a change is made, and the triggers log the changes. Remember that if you want to audit SELECT statements, you can't use triggers; you'll have to do that with in-code/stored-proc auditing. It's also worth remembering that, depending on your database, triggers may not fire in all circumstances. For example, most databases don't fire triggers during TRUNCATE statements. Check that your triggers get fired in every case that you need audited.
Alternatively, you could also take a look at using Service Broker to do async auditing on a dedicated machine. This is more complex and takes a bit of configuring to set up.
Whichever way you do it, you need to decide on the format the audit log will take. Normally you would save this log in your database, but you could just save it in a log file or whatever suits your requirements. You could use a single audit table that logs all changes, or you could have an audit table per main table being audited. For large-scale implementations you could even consider putting the audit tables in a totally separate database. If you're logging to a table, it's common to have a "change type" field which indicates whether the audited change was an insert, update or delete, along with the changed data, the user who made the change and the date/time the change was made. Don't forget to include the old and new data for update-style changes.
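As a starting point, a per-table audit table along those lines might look something like this (a sketch only; the table and column names are made up):
CREATE TABLE dbo.Customer_Audit
(
    AuditId    INT IDENTITY(1,1) PRIMARY KEY,
    ChangeType CHAR(1)       NOT NULL,  -- 'I' = insert, 'U' = update, 'D' = delete
    CustomerId INT           NOT NULL,  -- key of the audited row
    OldName    NVARCHAR(100) NULL,      -- value before the change (NULL for inserts)
    NewName    NVARCHAR(100) NULL,      -- value after the change (NULL for deletes)
    ChangedBy  NVARCHAR(128) NOT NULL,  -- who made the change
    ChangedAt  DATETIME      NOT NULL DEFAULT GETDATE()  -- when the change was made
);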
Ditto use triggers.
Anyone considering soft deletion should have a read of Richard Dingwall's The trouble with soft delete.
The most universal method is to create another table for storing versions of the records from the first table; the data columns then move out of the main table, which just points at the current version. Suppose you need versioning of a table Person(PersonId, Name, Surname):
CREATE TABLE Person
(
    PersonId INT,             -- PK
    CurrentPersonVersion INT  -- FK to PersonVersion
);
CREATE TABLE PersonVersion
(
    PersonVersionId INT,      -- PK
    PersonID INT,             -- FK to Person
    Name VARCHAR(100),        -- actual data
    Surname VARCHAR(100),     -- actual data
    ChangeDate DATETIME,      -- logging data
    ChangeAuthor VARCHAR(100) -- logging data
);
Now any change requires inserting a new PersonVersion row and updating CurrentPersonVersion.
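For example (a sketch using the columns defined above), an update to person 1 would look like:
-- insert the new version of the record
INSERT INTO PersonVersion (PersonVersionId, PersonID, Name, Surname, ChangeDate, ChangeAuthor)
VALUES (2, 1, 'John', 'Smith-Jones', GETDATE(), 'jsmith');
-- point the main row at its new current version
UPDATE Person
SET CurrentPersonVersion = 2
WHERE PersonId = 1;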
The best way to do this is set up triggers in the database that write to audit tables.
Solution 1: SQL Server Change Data Capture
https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server?view=sql-server-2017
First you need to enable change data capture on your database
USE AdventureWorks2012
GO
EXEC sys.sp_cdc_enable_db
GO
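You also need to enable capture on each table you want tracked. A sketch for the HR_Department capture instance used below (assuming the source table is HumanResources.Department):
EXEC sys.sp_cdc_enable_table
    @source_schema = N'HumanResources',
    @source_name = N'Department',
    @role_name = NULL,                    -- no gating role
    @capture_instance = N'HR_Department';
GO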
Then you can query the changes using fn_cdc_get_all_changes_ or fn_cdc_get_net_changes_.
-- ========
-- Enumerate All Changes for Valid Range Template
-- ========
USE AdventureWorks2012;
GO
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('HR_Department');
SET @to_lsn = sys.fn_cdc_get_max_lsn();
SELECT * FROM cdc.fn_cdc_get_all_changes_HR_Department
    (@from_lsn, @to_lsn, N'all');
Solution 2: SQL Server Database Auditing
Source : https://www.dbaservices.com.au/how-to-configure-sql-server-auditing/
ENABLE DATABASE AUDITING
Database auditing requires that a server audit (although not necessarily a server audit specification) be in place. The database audit specification, however, is created within the user database that is to be audited, rather than within the master database where the server audit is created. Database audit specifications can be found within the DB itself under Security -> Database Audit Specifications.
To create a database audit, you'll first need to USE the database (to select it). The following provides example syntax for auditing SELECT, UPDATE and DELETE operations on specific tables within that database:
USE UserDatabase
GO
CREATE DATABASE AUDIT SPECIFICATION [User_Database_Audit_Specification]
FOR SERVER AUDIT [SQL_Server_Audit]
ADD (SELECT , UPDATE , DELETE ON UserDatabase.dbo.Customer_DeliveryAddress BY dbo )
,ADD (SELECT , UPDATE , DELETE ON UserDatabase.dbo.DimCustomer_Email BY dbo )
,ADD (SELECT , UPDATE , DELETE ON UserDatabase.dbo.DimCustomer_Phone BY dbo )
WITH (STATE = ON) ;
GO
The SELECT, UPDATE and DELETE operations aren’t the only things you can add to the audit specification though…
+------------+-------------------------------------------------------------------+
| Action | Description |
+------------+-------------------------------------------------------------------+
| SELECT | This event is raised whenever a SELECT is issued. |
| UPDATE | This event is raised whenever an UPDATE is issued. |
| INSERT | This event is raised whenever an INSERT is issued. |
| DELETE | This event is raised whenever a DELETE is issued. |
| EXECUTE | This event is raised whenever an EXECUTE is issued. |
| RECEIVE | This event is raised whenever a RECEIVE is issued. |
| REFERENCES | This event is raised whenever a REFERENCES permission is checked. |
+------------+-------------------------------------------------------------------+
The full list of database events you can log is available here:
https://learn.microsoft.com/en-us/sql/relational-databases/event-classes/security-audit-event-category-sql-server-profiler?view=sql-server-2017
I was recently faced with a requirement to audit some tables and I opted to use triggers. Like others, I only wanted to see entries in the audit table for those fields that had actually changed. However, when updating the tables, the application was updating all the fields in the row whether they'd changed or not, so checking whether the fields had been updated availed me nothing - they all had!
What I wanted, therefore, was a method of checking the actual value in each field to see if it had changed or not and only writing it to the audit table if it had. Having been unable to find any solution to this conundrum anywhere, I came up with my own, as follows:
CREATE TRIGGER [dbo].[MyTable_CREATE_AUDIT]
ON [dbo].[MyTable]
AFTER UPDATE
AS
INSERT INTO MyTable_Audit
(ItemID,LastModifiedBy,LastModifiedDate,field1,field2,field3,
field4,field5,AuditDate)
SELECT i.ItemID,i.LastModifiedBy,i.LastModifiedDate,
field1 =
CASE i.field1
WHEN d.field1 THEN NULL
ELSE i.field1
END,
field2 =
CASE i.field2
WHEN d.field2 THEN NULL
ELSE i.field2
END,
field3 =
CASE i.field3
WHEN d.field3 THEN NULL
ELSE i.field3
END,
field4 =
CASE i.field4
WHEN d.field4 THEN NULL
ELSE i.field4
END,
field5 =
CASE i.field5
WHEN d.field5 THEN NULL
ELSE i.field5
END,
GETDATE()
FROM inserted i
INNER JOIN deleted d
ON i.ItemID = d.ItemID
As you can see, I'm comparing the values of each field in the deleted and inserted tables and only writing the field value from the inserted table to the audit table if they differ, otherwise I just write NULL.
It certainly works for me. Can anyone see any issues with this approach? My team owns both the application and the database, so possible curve balls like schema changes are covered off.
The other way of doing this, apart from triggers, is this:
Have four columns (UpdFlag, DelFlag, EffectiveDate and TerminatedDate) on each table you want an audit trail for.
Code your sprocs so that when you do an update, you pass all of the row's column data into the sproc; the sproc closes off the existing row by setting its UpdFlag and setting TerminatedDate to the current datetime.
Then it creates a new row with the new (updated) data, with EffectiveDate set to now and TerminatedDate set to the max date.
Likewise, if you want to delete a row, simply update it by setting the DelFlag and setting TerminatedDate to the current datetime. You are in effect doing a soft delete rather than an actual SQL DELETE.
That way, when you want to audit the data and show a trail of the changes, you simply filter for rows that have the UpdFlag set, or that fall between EffectiveDate and TerminatedDate. Likewise, for those that were deleted, filter for rows that have the DelFlag set or that fall between EffectiveDate and TerminatedDate. For the current rows, filter for rows that have both flags off. The advantage is that you don't have to create another table for the audit, as you would with triggers!
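A sketch of what the update part of such a sproc might do (the Customer table and its columns here are purely illustrative):
-- close off the current version of the row
UPDATE dbo.Customer
SET UpdFlag = 1,
    TerminatedDate = GETDATE()
WHERE CustomerId = @CustomerId
  AND TerminatedDate = '9999-12-31';  -- the row currently in effect
-- insert the new version of the row
INSERT INTO dbo.Customer (CustomerId, Name, UpdFlag, DelFlag, EffectiveDate, TerminatedDate)
VALUES (@CustomerId, @NewName, 0, 0, GETDATE(), '9999-12-31');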
I'd go the triggers route, creating a table with a structure similar to the updated one, plus additional columns for tracking changes such as ModifiedAt, and then adding an update trigger that inserts the changes into that table.
I find it easier to maintain than having everything in the application code. Of course, many people tend to forget about triggers when it comes to questions like 'wtf, this table is changing' ;) Cheers.