Two identity range constraints on subscriber table - SQL Server replication

We have transactional replication with updatable subscriptions. At the subscriber we have several tables with a double identity range constraint. (Not sure how to reproduce this; it may have happened during one of the reinitializations or re-creations of the replication.)
For instance:
CHECK NOT FOR REPLICATION (([ID]>(513000) AND [ID]<(514000)))
CHECK NOT FOR REPLICATION (([ID]>(347934) AND [ID]<(360000)))
DBCC CHECKIDENT result:
Checking identity information: current identity value 'NULL', current column value '538185'.
Replication works as intended, but we want to get rid of the excess constraint. We have no idea why the current identity is NULL here. I know we can reseed the identity so it falls within a range, but how do we determine which of the two constraints is the valid, current one for replication?
For some tables this is not an issue and the current identity is within one of the two ranges, but that raises another question: how can we safely remove the excess constraint?
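One way to see which range is actually in play is to compare the current identity value with the two constraint definitions. A minimal sketch (dbo.MyTable is a placeholder for the affected table):

-- list the NOT FOR REPLICATION check constraints on the table
SELECT cc.name, cc.definition, cc.is_not_for_replication
FROM sys.check_constraints AS cc
WHERE cc.parent_object_id = OBJECT_ID('dbo.MyTable');

-- show the current identity value without changing it
DBCC CHECKIDENT ('dbo.MyTable', NORESEED);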
I believe we could remove the article from the replication, verify all constraints are removed from the table, then put the article back and reinitialize all subscriptions. But reinitializing isn't really a good solution for us, because it would take too much time and it may harm our customers.
If we tried to just delete one of the constraints, would it do any harm to the replication? Is information about the constraints saved in some system tables that could cause trouble in the future?
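For completeness, dropping one of the constraints would just be a plain ALTER TABLE; whether replication later recreates or complains about it is exactly the open question here. A sketch with placeholder names:

ALTER TABLE dbo.MyTable DROP CONSTRAINT [CK_MyTable_ID_range_old];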
Any ideas for a neat solution?

Related

Setting Table Auto Clustering On in Snowflake is not clustering the table

I moved from manual clustering to auto clustering around two weeks back.
The steps I used are below.
Set AUTO_CLUSTERING_ON to yes for the table.
Create a middle table and insert the records into it.
Then insert into the main table from the middle table, ordering by the clustering key.
After that I see the clustering is all over the place.
I once did the manual clustering as well and saw the clustering doing well.
However, on the next insert into the main table, clustering looks troublesome again.
Please suggest if I am missing anything.
Please note:
The data loaded into the middle table is inserted from some other table as well, and that table is never clustered. I am not sure if that is the issue (though I feel it should not be).
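For reference, the steps above might look roughly like this in Snowflake SQL (table, column and source names are placeholders, and I am assuming the clustering key was defined with CLUSTER BY):

ALTER TABLE main_table CLUSTER BY (event_date);   -- define the clustering key on the table
ALTER TABLE main_table RESUME RECLUSTER;          -- make sure automatic clustering is active
CREATE OR REPLACE TABLE middle_table AS
SELECT * FROM source_table;                       -- load the middle (staging) table
INSERT INTO main_table
SELECT * FROM middle_table ORDER BY event_date;   -- insert ordered by the clustering key
SELECT SYSTEM$CLUSTERING_INFORMATION('main_table', '(event_date)');  -- inspect clustering depth afterwards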
You may need to raise a case with Snowflake to enable automatic clustering. Accounts that were created a while ago won't have this enabled. From the documentation:
If manual reclustering is still available in your account, Automatic Clustering may not be enabled yet for your account.
You can request Automatic Clustering to be enabled for your account; however, it will only affect clustered tables that are defined from the time after the feature is enabled.
For clustered tables that were defined before the feature is enabled, you must explicitly “resume” Automatic Clustering for each table. You can use SQL to determine whether Automatic Clustering is enabled for a given table.
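As a sketch, checking this for a table could look like the following (the automatic_clustering / auto_clustering_on columns are the ones to inspect):

SHOW TABLES LIKE 't1';   -- the automatic_clustering column shows ON or OFF
SELECT table_name, auto_clustering_on
FROM information_schema.tables
WHERE table_name = 'T1';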
Also from the documentation, you should try to run the resume recluster command, since the table may have been created before automatic clustering was enabled for your account:
alter table t1 resume recluster;
Don't forget that the table gets reclustered automatically, at Snowflake's discretion. Snowflake may simply not think the table requires reclustering, based on a number of factors (which I don't know :)).
I think raising a case with Snowflake will probably solve this pretty quickly so that may be the best route.
Not specifically related to the question, but I have found that periodically rebuilding a table will achieve the best clustering results, especially for tables which churn frequently. To do this you can specify an ORDER BY clause which mimics your clustering keys.
CREATE OR REPLACE TABLE t1 COPY GRANTS AS
SELECT * FROM t1 ORDER BY a, b, c;

Is resetting the IDENTITY column after deleting one or two rows good practice in SQL Server?

I have a table with 100 records in a SQL Server database, and I deleted the 50th record in the table.
Is it a good idea to reset the identity column to maintain the sequence of items?
What benefit do we get by resetting the order?
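For context, "resetting" the identity here would mean reseeding it with DBCC CHECKIDENT; a sketch with a placeholder table name:

DBCC CHECKIDENT ('dbo.Items', NORESEED);     -- report the current identity value, change nothing
DBCC CHECKIDENT ('dbo.Items', RESEED, 100);  -- set the current identity value to 100 (next insert gets 101)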
The official Microsoft statement is that you should expect gaps when using IDENTITY, so I would infer it's absolutely fine if your deletions cause a gap. There is no reason to fill in the gap.
https://connect.microsoft.com/SQLServer/feedback/details/739013/failover-or-restart-results-in-reseed-of-identity
Since that is a very long thread on a tangentially related topic, the relevant quote is below:
As documented in books online for previous versions of SQL Server the
identity property does not guarantee the absence of gaps, this
statement remains true for the above workarounds. These solutions do
help with removing the gaps that occur as part of restarting the
instance in SQL Server 2012.
It is not always a good idea to reset your identity columns; think about other tables that might refer to this table using foreign keys. You would have to update those foreign key values as well, which could lead to corrupt or incorrect relationships.
If you are sure other tables do not rely on your identity column, then you can safely reset the column. The benefits of an ordered table do not outweigh the risks, in my opinion.
NO! This is - in general! - not good practice!
IDENTITY (and other SEQUENCE-based approaches) is not meant to create a contiguous line of IDs.
If one is missing, it is missing. If many are missing, just the same. That's it.
If you need a gap-less number, you must create your own logic to ensure this.
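As a sketch of that "own logic", assuming a dbo.Invoices table whose InvoiceNo must be gap-less: take the number inside the same transaction as the insert, under a range lock, so a rollback never burns a number.

DECLARE @next int;

BEGIN TRANSACTION;

    SELECT @next = ISNULL(MAX(InvoiceNo), 0) + 1
    FROM dbo.Invoices WITH (UPDLOCK, HOLDLOCK);   -- range lock serializes concurrent callers

    INSERT INTO dbo.Invoices (InvoiceNo, CreatedAt)
    VALUES (@next, SYSDATETIME());                -- same transaction: a rollback leaves no gap

COMMIT TRANSACTION;

The price is serialization: concurrent inserts on that table queue up behind the lock.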

SQL Server update query that only updates the table itself, not the indexes

I need to write a query that updates only the table, not the indexes,
because I want to update an int field and don't want 10 huge indexes to be updated.
If the int field is included in any of the index definitions then they will have to be updated too.
SQL Server won't allow the base table to have one value and the indexes another for obvious data integrity reasons.
If the int field is not included in any of the index definitions then only the table will be updated anyway.
You can disable the indexes, but re-enabling them involves rebuilding the whole index.
It depends on what you really want to do.
Keeping the index consistent with the table data is the Consistency in ACID. This is how SQL Server and ACID-compliant RDBMSes work.
There are cases such as bulk loads where you want to delay this Consistency. So if you have this use case, DROP or DISABLE the indexes.
If you disable the indexes:
they will never be used for any query
all associated unique and foreign key constraints etc. will be disabled too
they are not maintained
If you DROP them, of course they can't be used either.
After your bulk load is finished, you enable or create the indexes/constraints again.
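As a sketch, assuming a nonclustered index named IX_BigTable_IntCol on dbo.BigTable:

ALTER INDEX IX_BigTable_IntCol ON dbo.BigTable DISABLE;

-- ... run the bulk load / mass update here ...

ALTER INDEX IX_BigTable_IntCol ON dbo.BigTable REBUILD;  -- re-enabling means a full rebuild
-- or, to handle every index on the table at once:
ALTER INDEX ALL ON dbo.BigTable REBUILD;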
If this is what you really want then read MSDN:
Disabling Indexes
Guidelines for Disabling Indexes and Constraints
Perhaps Filtered Indexes are what you're looking for.
This is a SQL Server 2008 feature that lets you create an index that only applies to certain values in a column.
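For example, if the int field only matters for a subset of rows, a filtered index means that updates to rows outside the predicate never touch the index. A sketch with placeholder names:

CREATE NONCLUSTERED INDEX IX_Orders_Quantity_Active
ON dbo.Orders (Quantity)
WHERE Status = 'Active';   -- only 'Active' rows are indexed; updating Quantity on other rows leaves the index alone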

SQL Server: What is the benefit of using "Enforce foreign key constraint" when it's set to "NO"?

I know the purpose of "Enforce foreign key constraint" in an RDBMS. But is there any benefit when it's set to "NO"?
In normal production, this setting should never be set to NO.
But: when you're developing, or restructuring a database, or when you do e.g. a large bulk load of data that you'll need to "sanitize" (clean up), then it can make sense to turn off foreign key constraints to allow "non-valid" data to be loaded into a table. Of course, as I said - you shouldn't keep that setting turned off for a long period of time - you should then proceed to clean up the data, either delete those rows that are in violation of the FK constraint, or update their values so they match a parent row.
So again: in "normal" production mode, this setting should never be NO - but for specific tasks, it might help get the job done more easily. Use it with caution, and always turn the FK constraints back on as soon as you can!
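In T-SQL this toggle corresponds to NOCHECK / CHECK on the constraint; a sketch with placeholder table and constraint names:

-- stop enforcing the FK while loading / cleaning the data
ALTER TABLE dbo.OrderLines NOCHECK CONSTRAINT FK_OrderLines_Orders;

-- ... bulk load, then delete or fix the orphaned rows ...

-- re-enable and revalidate existing rows so the constraint is trusted again
ALTER TABLE dbo.OrderLines WITH CHECK CHECK CONSTRAINT FK_OrderLines_Orders;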
Not in everyday usage, as far as I know. The times I've de-enforced foreign keys for a while are when there were problems with the data and fixing them was hindered by the relationship checks.
During bulk operations constraint checks are temporarily ignored in order to increase performance.
It can also be useful in a data warehouse staging environment where you don't want to enforce the constraints before you have processed / cleansed the data, but you still want to be able to use the FK relationships to understand how the tables link to each other. These relationships can be picked up and displayed in third party tools (e.g. Visio, ERWIN, etc), so it can be useful metadata even though not strictly enforced.

Table in DB for generating primary keys?

Do you ever use a separate table for "generating" artificial primary keys for the DB (and why)? What I mean is having a table with two columns, table name and current ID, with which you could get a new "ID" for some table by simply locking the row with that table name, getting the current value of the key, incrementing it by one, and unlocking the row. Why would you prefer this over a standard integer identity column?
P.S. The "idea" is from Fowler's Patterns of Enterprise Application Architecture, btw...
This is called Hi/Lo assignment.
You would do this by having a trigger on INSERT on your tables that gets the ID from this table and increments it, either before or after you get your ID, depending on your choice.
This is commonly used when you have to deal with multiple database engines. The auto-incrementing identifier in Oracle is achieved through a SEQUENCE, which you increment with SEQUENCE.NEXTVAL from within a BEFORE INSERT TRIGGER on your data table.
By contrast, SQL Server has IDENTITY columns, which auto-increment natively; this is managed by the database engine itself.
In order for your software to work on both database engines, you have to settle on some sort of standard, and the most common "standard" used for this is Hi/Lo assignment of the primary key.
This is one approach amongst others. These days, with ORM mapping tools such as NHibernate, it is offered through configuration, so you have less to care about on both the application and the database sides.
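A sketch of the Hi/Lo idea in T-SQL (table name and block size are placeholders): the database hands out one "hi" block per round trip, and the application combines it with a local "lo" counter, so it does not have to hit the key table for every row.

CREATE TABLE dbo.HiLo
(
    EntityName sysname NOT NULL PRIMARY KEY,
    NextHi     int     NOT NULL
);
INSERT INTO dbo.HiLo (EntityName, NextHi) VALUES ('Customer', 0);

-- reserve one "hi" block atomically; the application then uses
-- keys (hi * 100 + 0) .. (hi * 100 + 99) without further DB calls
UPDATE dbo.HiLo
SET NextHi = NextHi + 1
OUTPUT deleted.NextHi AS ReservedHi
WHERE EntityName = 'Customer';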
EDIT #1
Because this kind of manoeuvre can't be used at a global scope, you'd have to have such a table per database or per database schema. This way, each schema is independent from the others. However, data in one schema can't implicitly be moved to another with the same key, as it would perhaps conflict with an already existing row.
As for a security schema, it accesses the same database as another schema or user, so no additional table should exist for a specific security schema.
Whenever you can use SQL Server's identity or GUID features, you should. However, there are a few situations where this may not be possible.
One example is that SQL Server only allows one identity column per table. Rarely, a table will have records that need both a private ID and a public ID, and a limit of one identity column means generating both as integers can be a pain. You could always use a GUID for one, but you want the integer on the private ID for speed, and you may also want the public ID to be more human-readable than a GUID.
In this situation, an extra table for generating the ids can make sense. However, I'd do it a bit differently. Still have two columns in the table, but make one "shadow" or "Id mapping" table for every real table. One of the columns will be your private id (unique constraint) and one will be your public id (identity with maybe an increment value of '7' or '13' or other number that's less obvious than '1').
The key difference here is that you don't want to do the locking yourself. Let SQL Server handle it.
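A sketch of that shadow / ID-mapping table, with placeholder names and the increment of 7 mentioned above:

CREATE TABLE dbo.CustomerIdMap
(
    PrivateId int NOT NULL UNIQUE,                         -- internal key used by the application
    PublicId  int IDENTITY(1000, 7) NOT NULL PRIMARY KEY   -- public key, stepping by 7
);

INSERT INTO dbo.CustomerIdMap (PrivateId) VALUES (42);
SELECT SCOPE_IDENTITY() AS NewPublicId;                    -- SQL Server handled the locking for us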
The only time I have ever used this is when I had an application in Btrieve, and it didn't have an identity column. I should also say that when they tried to use this table, it caused a massive slow-down when they tried to import data, because of all the extra reads and writes. My friend looked at it and rewrote how they did it to speed it up, but the moral of the story is that if you do something like this incorrectly, there can be brutal consequences.
Personally, I don't think I would ever want to do this. There is too much possibility for error: two people try to use the same key because they forgot to lock the table before grabbing the ID. This just seems like something that should be left up to the RDBMS if at all possible. As Will brought up, it's easy to minimize this situation, but if you don't know what you are doing, it can happen.
You wouldn't prefer it at all.
Whatever you gain by using the pattern or becoming DB agnostic, you'll lose in headaches, support and performance.
locking the row with that table name, getting the current value of the key, increment it by one, and unlock the row
This sounds simple, doesn't it?
UPDATE TableOfId
SET Id += 1
OUTPUT Inserted.Id
WHERE Name = @Name;
In reality, it's a disaster. No activity occurs in the application as a standalone operation: all operations are part of transactions. One cannot simply 'unlock' the row, because the 'unlock' will actually occur only at commit time. This means that all transactions that need an ID for a table are serialized and only one can proceed at any time. It also means that transactions that access more than one table will likely deadlock on updating the table of IDs, because enforcing the 'get the next ID' update order is hard in practice.
To avoid complete serialization, one needs to obtain the IDs in separate, standalone transactions that can commit immediately (usually an implicit auto-commit transaction on the UPDATE itself). But this complicates the application logic tremendously. Every operation needs to maintain two separate connections to the database: one to do the normal transaction logic and another to obtain the needed IDs. Even then, the update of IDs can become such a hot spot that it can still cause visible contention and blocking (similar to the dreaded 'update page hit count +1' prevalent in web apps).
In short: use IDENTITY. The identity generation is optimized for high concurrency.
I have seen this pattern used when data created in one database needs to be migrated, backed up, clustered or staged to another database. In this situation, first of all you want to ensure the primary keys will not need to change; secondly, the foreign keys; and thirdly, any externally exposed keys or durable references.
