Nanosecond precision for a datetime column in Sybase

We have an audit table which tracks changes to the master table via an insert/update trigger. The trigger copies all new field values from the master table into the audit table. The audit table has a datetime column which records the time at which the insert/update happened on the master table (getdate()).
We have a unique index over the primary key and the time column. The problem is that if more than one update happens on the master table at almost the same time, it ends in a unique key violation.
Is there any datetime type which captures nanosecond-level precision?

The DB should inherently handle updates to the same record via ACID. "Cheesing" the audit table with a joint master_table_id / updatetime primary key to prevent "too many updates" in a short period of time is probably not the right approach... especially as performance improves with new hardware, you could have more "legitimate" updates that your PK is preventing.
I hate to ask, but what type of operation are you performing that's updating the same row, many times, at the sub-millisecond level? Are you updating col2, then col3, then col4 all for the same PK via some JDBC or ADO connection?
Can you batch these "many" updates into one stored procedure call via inputs to the stored proc, so you limit your write operations? This would be faster, and would cause less churn on the audit trail.
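A minimal sketch of that batching idea, with hypothetical names (master_table keyed by id, with the col2/col3/col4 columns from the comment above); one procedure call means the audit trigger fires once instead of three times:
CREATE PROCEDURE upd_master_batch
    @id   INT,
    @col2 VARCHAR(50),
    @col3 VARCHAR(50),
    @col4 VARCHAR(50)
AS
BEGIN
    -- a single UPDATE, so the insert/update trigger writes one audit row
    UPDATE master_table
       SET col2 = @col2,
           col3 = @col3,
           col4 = @col4
     WHERE id = @id
END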

Related

Update Stats and Table Count

I have a few questions.
Is there any way to get the table row count if we are not maintaining historical data for the count?
The questions below are about update statistics:
Should we run update statistics for all the tables in our database? The database is highly transactional.
How should I calculate a sample size that will suit all the tables?
There are some tables which get reindexed; these we will ignore. We have a job which reorganises some tables.
Now a decision needs to be made about which tables we should run update statistics on:
The tables which have been reorganised
Or
The tables whose statistics are outdated.
To get the number of rows in a table, assuming the table has a primary key field:
SELECT COUNT(PrimaryKeyField) AS NoOfRows FROM table
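If a full scan of a big table is too slow, the catalog can return the count without touching the data (a sketch; MyTable is a placeholder name, sp_spaceused also exists in Sybase ASE, while sys.dm_db_partition_stats is SQL Server only):
EXEC sp_spaceused 'MyTable'        -- reports rows, reserved, data and index sizes
SELECT SUM(row_count) AS NoOfRows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('MyTable')
  AND index_id IN (0, 1)           -- heap or clustered index only, to avoid double counting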

SQL replication error - record exists on Publisher but replication tries to delete/insert it on the Subscriber, causing Foreign Key/Primary Key violations

We have transactional one-way replication running, and suddenly today we started getting these errors:
The DELETE statement conflicted with the REFERENCE constraint "FK_BranchDetail_Branch".
The conflict occurred in database "LocationDB", table "dbo.BranchDetail", column 'BranchNumber'.
Message: The row was not found at the Subscriber when applying the replicated command.
Violation of PRIMARY KEY constraint 'PK_Branch'. Cannot insert duplicate key in object
'dbo.Branch'. The duplicate key value is (23456)
Disconnecting from Subscriber 'SQLDB03'
Publisher - SQLDB02.LocationDB
Subscriber - SQLDB03.LocationDB
Tables on both servers:
Branch (BranchNumber PrimaryKey)
BranchDetail (BranchNumber ForeignKey references previous table)
select * from SQLDB02.LocationDB.Branch -- contains : 23456,'Texas',...
select * from SQLDB03.LocationDB.Branch -- contains : 23456,'NULL',...
The problem is - the BranchNumber in question '23456' exists in all 4 tables (Publisher..Branch, Publisher..BranchDetail, Subscriber..Branch, Subscriber..BranchDetail).
Yet, when I ran a trace on the Subscriber, I saw repeated commands like:
exec [sp_MSdel_dboBranch] 23456 -- which throws FK violation
exec [sp_MSins_dboBranch] 23456,'NULL',... -- which throws PK violation
I'm guessing it's trying to update the record on the subscriber by doing a Delete + Insert, but it's unable to.
Users do not have access to modify the Subscriber table. But they can modify the Publisher table through the UI, and have been doing so for a long time without issue. There is also a job that updates the Publisher table once every night. We started getting this error around noon today.
Our last resort is to reinitialize subscription off-hours.
But any ideas what could have caused it and how to fix it?
For transactional replication, updating a primary key column is replicated as a delete + insert (a deferred update). Because this PK column has an FK constraint, the delete will fail at the subscriber. You have a couple of workarounds to prevent this from happening moving forward:
Disable replication of FK constraints, as it is not really needed for one-way replication. Why? Users are not entering data at the subscriber, so there is no need to maintain referential integrity, and transactional replication replicates log transactions, so the order of transactions is pretty much guaranteed; there is no need to worry about one transaction showing up before another.
Enable trace flag 8207 on the publisher. If only a single row is updated, it will be replicated as a single UPDATE statement. If the update affects multiple rows, it will still be replicated as a deferred update.
Somehow block users from updating PKs.
IMO, the best bet is the first option.
How to fix this? A reinit is one way. But if you can manually disable or drop the FK constraint on the subscriber, that is the easiest solution.
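A minimal sketch of that last fix, to be run on the subscriber (SQLDB03.LocationDB) only; the FK column list is assumed from the table descriptions above:
-- Option 1: drop the constraint so the replicated DELETE + INSERT pair can be applied
ALTER TABLE dbo.BranchDetail DROP CONSTRAINT FK_BranchDetail_Branch;
-- Option 2: recreate it as NOT FOR REPLICATION, so the distribution agent bypasses it
-- while any direct changes are still checked
ALTER TABLE dbo.BranchDetail WITH NOCHECK
    ADD CONSTRAINT FK_BranchDetail_Branch
    FOREIGN KEY (BranchNumber) REFERENCES dbo.Branch (BranchNumber)
    NOT FOR REPLICATION;
Trace flag 8207 from the second workaround can be turned on with DBCC TRACEON (8207, -1), or as a -T8207 startup parameter so it survives a restart.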

SQL Server: re-use of deleted IDENTITY PK values

Just a general question.
I have a table with an IDENTITY PK which is not referenced by any other table.
There is one other (and only one) FK column in the table.
I run a DELETE command on that table with some condition.
I can INSERT new records into the table, and the next PK IDs are assigned automatically.
BUT the deleted ID numbers are never re-used in the PK.
If I run something like
DECLARE @max_PKid BIGINT;
SET @max_PKid = (SELECT ISNULL(MAX(PKid), 0) FROM Table WHERE FKid = @somevalue);
DBCC CHECKIDENT ('Table', RESEED, @max_PKid)
right after the DELETE, there will be access violation problems on the next INSERT.
Question 1: Is it acceptable in general to have gaps in the (unordered, or rather unreseeded) PK IDs in the table after doing DELETE/INSERT without using DBCC CHECKIDENT? Should I care about them?
Question 2: If not, what can I do about it?
No you should not worry. There are also other circumstances in which you can get a 'hole' in an IDENTITY range. For example, if you start a transaction, insert 100,000 rows into a table, then rollback that transaction - those IDENTITY values are then gone. This is not something you should be concerned about.
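A small demonstration of the rollback case (a sketch, using a throwaway table name DemoIdentity):
CREATE TABLE DemoIdentity (PKid BIGINT IDENTITY(1,1) PRIMARY KEY, val INT);
INSERT INTO DemoIdentity (val) VALUES (1);   -- gets PKid = 1
BEGIN TRANSACTION;
INSERT INTO DemoIdentity (val) VALUES (2);   -- consumes PKid = 2
ROLLBACK TRANSACTION;                        -- the row is gone, but the identity counter is not rewound
INSERT INTO DemoIdentity (val) VALUES (3);   -- gets PKid = 3, leaving a gap at 2
SELECT PKid, val FROM DemoIdentity;          -- returns PKid 1 and 3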

Suggestions on Adding Transaction Auditing In a SQL Server 2008 Database For a Financial System

I'm working on an application that is used to store payment information. We currently have a Transaction Audit table that works thusly:
Any time a field changes in any table under audit, we write an audit row that contains the table name, the field name, the old value, the new value, and the timestamp. One insert takes place per field changed per row being updated.
I've always avoided Triggers in SQL Server since they're hard to document and can make troubleshooting more difficult as well, but is this a good use case for a trigger?
Currently the application determines all audit rows that need to be added on its own and sends hundreds of thousands of audit row INSERT statements to the server at times. This is really slow and not really maintainable for us.
Take a look at Change Data Capture if you are running Enterprise edition. It provides the DML audit trail you're looking for without the overhead associated with triggers or custom user/timestamp logging.
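A minimal sketch of turning CDC on (Enterprise edition only), assuming a hypothetical audited table dbo.Payments; the capture and cleanup jobs it creates need SQL Server Agent to be running:
-- enable CDC at the database level, then per table
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Payments',
     @role_name     = NULL;   -- NULL = no gating role required to read the change table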
I have worked on financial systems where each table under audit had its own audit table (e.g. for USERS there was USERS_AUDIT), with the same schema (minus the primary key) plus:
A char(1) column to indicate the type of change ('I' = insert, 'U' = update, 'D' = delete)
A datetime column with a default value of GETDATE()
A varchar(255) column indicating the user who made the change (defaulting to USER_ID())
These tables were always inserted into (append-only) by triggers on the table under audit. This will result in fewer inserts for you and better performance, at the cost of having to administer many more audit tables.
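A sketch of that pattern for a hypothetical USERS table with columns (UserId, UserName, Email); SUSER_SNAME() is used here for a textual user name rather than USER_ID():
CREATE TABLE USERS_AUDIT (
    UserId     INT          NOT NULL,
    UserName   VARCHAR(100) NULL,
    Email      VARCHAR(255) NULL,
    ChangeType CHAR(1)      NOT NULL,                      -- 'I' = insert, 'U' = update, 'D' = delete
    ChangedAt  DATETIME     NOT NULL DEFAULT GETDATE(),
    ChangedBy  VARCHAR(255) NOT NULL DEFAULT SUSER_SNAME()
);
GO
CREATE TRIGGER trg_USERS_audit ON USERS
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserts, and the "after" image of updates
    INSERT INTO USERS_AUDIT (UserId, UserName, Email, ChangeType)
    SELECT i.UserId, i.UserName, i.Email,
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted i;
    -- deletes
    INSERT INTO USERS_AUDIT (UserId, UserName, Email, ChangeType)
    SELECT d.UserId, d.UserName, d.Email, 'D'
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END
GO
One trigger firing per statement (not per field) is what keeps the insert count down compared with the per-field approach in the question.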
I've implemented audit logic in SPROCS before, but the same idea applies to doing it in triggers.
Working Table: (id, field1, field2, field3, ... field-n)
History Table: (userID, Date/time, action (CUD), id, field1, field2, field3, ... field-n)
This also allows for easy querying to see how data historically changed.
Each time a row in a table is changed, a record is created in History table.
Some of our tables are very large (100+ fields), so 100+ inserts per change would be too intense a load, and there would also be no meaningful way to quickly see what happened to the data.

Converting int primary key to bigint in Sql Server

We have a production table with 770 million rows and change. We want (need?) to change the primary ID column from int to bigint to allow for future growth (and to avoid the sudden stop when the 32-bit integer space is exhausted).
Experiments in DEV have shown that this is not as simple as altering the column, as we would need to drop the index and then re-create it. So far in DEV (which is a bit humbler than PROD) the dropping of the index has not finished after an hour and a half. This table is hit 24/7, and having it offline for that long is not an option.
Has anyone else had to deal with a similar situation? How did you get it done?
Are there alternatives?
Edit: Additional Info:
The Primary key is clustered.
You could attempt a staged approach:
1. Create a new bigint column.
2. Create an insert trigger to keep new entries in sync across the two columns.
3. Execute an update to populate all the empty values in the bigint column with the converted value.
4. Change the primary index on the table from your old ID column to the new one.
5. Point any FKs and queries at the new column.
6. Change the new column to become your identity column and remove the insert trigger from step 2.
7. Delete the old ID column.
You should end up spreading the pain out over these seven steps instead of hitting it all at once.
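A compressed sketch of steps 1-3 above, with hypothetical names (dbo.OrderHistory, old column ID, new column NewID); batching the backfill keeps the transaction log and locking manageable:
ALTER TABLE dbo.OrderHistory ADD NewID BIGINT NULL;
GO
-- step 2: keep new rows in sync while the backfill runs
CREATE TRIGGER trg_OrderHistory_sync ON dbo.OrderHistory
AFTER INSERT
AS
    UPDATE oh
       SET NewID = oh.ID
      FROM dbo.OrderHistory oh
      JOIN inserted i ON i.ID = oh.ID;
GO
-- step 3: backfill existing rows in small batches
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.OrderHistory
       SET NewID = ID
     WHERE NewID IS NULL;
    IF @@ROWCOUNT = 0 BREAK;
END
The remaining steps (swapping the clustered primary key, repointing FKs, and dropping the old column) still need a maintenance window, but the idea is to spread the pain out rather than take it in one hit.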
Create a parallel table with the longer data type for new rows and UNION the results?
What I had to do was copy the data into a new table with the desired structure (primary/clustered key only, non-clustered/FK once complete). If you don't have the room, you could bcp out the data and back in. You may need an application outage to make this happen.
What doesn't work: alter table Orderhistory alter column ID bigint, because of the primary key. Don't drop the key and alter the column, as you will just fill your log file and it will take much longer than copy/bcp.
Never use the SSMS table designer to change a column property; it copies the table into a temp table and then does a rename once done. Look up the ALTER TABLE ... ALTER COLUMN syntax and use it, and possibly defragment once complete if you widened a column that sits in the middle of the table.
