My SQL Server 2014 instance restarted unexpectedly, and that broke the straight auto-increment of identity values on my entities. All new entities inserted into the tables now have their identities incremented by 10,000.
Say there were entities with IDs 1, 2, 3; newly inserted entities now get IDs like 10004, 10005.
Here is real data:
..., 12379, 12380, 12381, (after the restart) 22350, 22351, 22352, 22353, 22354, 22355
(An extra question here: why was the very first entity after the restart inserted with 22350? I thought it should have been 22382, since 12381 was the latest ID at that moment and 12381 + 10001 = 22382.)
I searched and found the reasons for what happened. Now I want to prevent such situations in the future and fix the current jump. It's a production server and users continuously add new data to the DB.
QUESTION 1
What options do I have here?
My thoughts on how to prevent it are:
Use sequences instead of identity columns
Enable trace flag 272 (to disable the identity cache) and reseed the identity so it continues from the latest correct value (I guess there is such an option)
What are the drawbacks of the two options above? Please suggest other approaches if there are any.
QUESTION 2
I'm not an expert in SQL Server, and now I need to normalize and adjust the numbering of entities, since it's a business requirement. I think I need to write a script that updates the wrong ID values, setting them to the right ones. Is it dangerous to update identity values? Some tables have dependent records. What might this script look like?
OTHER INFO
Here is how my identity columns are declared (I got this using the "Generate scripts" option in SSMS):
CREATE TABLE [dbo].[Tasks]
(
[Id] [uniqueidentifier] NOT NULL,
[Created] [datetime] NOT NULL,
...
[TaskNo] [bigint] IDENTITY(1,1) NOT NULL,
CONSTRAINT [PK_dbo.Tasks]
PRIMARY KEY CLUSTERED ([Id] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
I also use Entity Framework 6 for database manipulation.
I will be happy to provide any other information by request if needed.
I did this once during a weekend of downtime and ended up having to reseed the whole table, by turning off identity insert and then updating each row with a row number. This was based on the table's correct sort order, to make sure the sequence was correct.
As it updated the whole table (500 million rows), it generated a hell of a lot of transaction log data. Make sure you have enough space for this, and pre-size the log if required.
As said above, though, if you must rely on the identity column, then consider changing it to a sequence. Also, make sure your rollback mechanism is sound if there is an error during insert and the sequence has already been incremented.
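For what it's worth, a minimal sketch of the reseed step mentioned above, using the Tasks table and TaskNo column from the question (assuming TaskNo is the jumped column). It only changes where the next identity value starts, it does not renumber existing rows, and it is best run in a quiet window:

DECLARE @maxTaskNo bigint = (SELECT MAX(TaskNo) FROM dbo.Tasks);
DBCC CHECKIDENT ('dbo.Tasks', RESEED, @maxTaskNo); -- next generated TaskNo will be @maxTaskNo + 1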
Related
I have a weird issue. In the MSSQL database I have multiple tables, but only two of them do this:
I insert a new row
get row count (which is increased by 1)
get row count again within seconds (this time it's decreased) and the new row is not in the table anymore
The query I use to insert a row and get the count:
INSERT INTO [dbo].[CSMobileMessages]
([MessageSID],[IssueID],[UserSent])
VALUES
('213',0,'blabla')
SELECT count([IDx])
FROM [dbo].[CSMobileMessages]
The SQL query returns "1 row affected" and I even get back the new row ID from the identity column. No errors at all. I checked in Profiler, which shows that 1 row was inserted successfully and nothing else happened.
The table has no triggers. There is an index only on the identity field (IDx), and the user is "sa" with full access. I tried with a different user, but the same thing happens.
The table is called "CSMobileMessages", so I created a new table:
CREATE TABLE [dbo].[CSMobileMessages2](
[IDx] [int] IDENTITY(1,1) NOT NULL,
[MessageSID] [varchar](50) NULL,
[IssueID] [int] NOT NULL,
[UserSent] [varchar](50) NULL,
CONSTRAINT [PK_CSMobileMessages2] PRIMARY KEY CLUSTERED
(
[IDx] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[CSMobileMessages2] ADD CONSTRAINT [DF_CSMobileMessages2_IssueID] DEFAULT ((0)) FOR [IssueID]
GO
I inserted 1000 rows into the new table and it worked. So I deleted the old table (CSMobileMessages) and renamed the new table from CSMobileMessages2 to CSMobileMessages.
As soon as I do that, the inserted rows get deleted and I get the exact same row count for the new table that I had with the old one. Also, I can't insert rows anymore. No services or any other software touch this table. However, if I restart the server, I can insert one new row, and then it starts happening again.
Edit:
I use SSMS and connect to the database remotely, but I tried locally on the server as well and the same thing happens. A service used this table, but I disabled it when this started a few days ago. Before that, the service ran happily for a year with no issues. I double-checked to make sure: the service is turned off and no one connects to that table but me.
Has anyone ever seen this issue before, and do you know what causes it?
I gave up, and as a last try the whole database was restored from a backup a few days old; it's now working as it's supposed to. I'm not marking this question as answered because, even though it fixed the problem, I still have no idea what exactly happened; in my 20+ years of coding I have never seen anything like this before.
Thanks to everyone who tried to help with ideas!
Hi, I would like to ask how to partition the following table (see below). The problem I'm having is not the retrieval of history records, which was resolved by the clustered index. But as you can see, the index is based on HistoryParameterID, then the timestamp; this is needed because rows are retrieved by those columns.
The problem here is that once the table reaches ~1 billion records, inserts slow down. The workload is 15k rows/second inserted (note this can be 30k - 100k), and each row corresponds to a HistoryParameterID.
Basically, HistoryParameterID is not unique; it has a one-to-many relationship with the other columns of the table below.
My hunch is that the index slows down the inserts, because inserts are not always at the end of the table: rows are arranged by HistoryParameterID.
I did some testing using the timestamp as the index key, but to no avail, since query performance was unacceptable.
Is there any way to partition this by HistoryParameterID? I tried creating 15k tables for a partitioned view, but when I created the view it didn't finish executing. Any tips? Or is there any other way to partition? Please note that I'm using Standard Edition and Enterprise Edition is not an option.
CREATE TABLE [dbo].[HistorySampleValues]
(
[HistoryParameterID] [int] NOT NULL,
[SourceTimeStamp] [datetime2](7) NOT NULL,
[ArchiveTimestamp] [datetime2](7) NOT NULL CONSTRAINT [DF__HistorySa__Archi__2A164134] DEFAULT (getutcdate()),
[ValueStatus] [int] NOT NULL,
[ArchiveStatus] [int] NOT NULL,
[IntegerValue] [bigint] SPARSE NULL,
[DoubleValue] [float] SPARSE NULL,
[StringValue] [varchar](100) SPARSE NULL,
[EnumNamedSetName] [varchar](100) SPARSE NULL,
[EnumNumericValue] [int] SPARSE NULL,
[EnumTextualValue] [varchar](256) SPARSE NULL
) ON [PRIMARY]
CREATE CLUSTERED INDEX [Source_HistParameterID_Index] ON [dbo].[HistorySampleValues]
(
[HistoryParameterID] ASC,
[SourceTimeStamp] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GO
I was trying it so I created 15k tables for a partitioned view. But when I created the view it didn't finish executing. Any tips? Or is there any way to partition? Please note that I'm using Standard Edition and Enterprise Edition is not an option.
If you go down the partitioned view path (http://technet.microsoft.com/en-us/library/ms190019.aspx), I suggest fewer tables (under one hundred). Without partitioned tables, the optimizer must do a lot of work, since each table in the view could be indexed differently.
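For illustration, a pared-down sketch of a local partitioned view with just two member tables (the table names, ID ranges, and the trimmed column list are made up; for the view to be updatable, each member table would also need a primary key containing HistoryParameterID):

CREATE TABLE dbo.HistorySampleValues_P1
(
    HistoryParameterID int NOT NULL
        CONSTRAINT CK_HSV_P1 CHECK (HistoryParameterID BETWEEN 1 AND 5000),
    SourceTimeStamp datetime2(7) NOT NULL,
    ValueStatus int NOT NULL,
    ArchiveStatus int NOT NULL
    -- ...remaining value columns as in the original table...
);
CREATE TABLE dbo.HistorySampleValues_P2
(
    HistoryParameterID int NOT NULL
        CONSTRAINT CK_HSV_P2 CHECK (HistoryParameterID BETWEEN 5001 AND 10000),
    SourceTimeStamp datetime2(7) NOT NULL,
    ValueStatus int NOT NULL,
    ArchiveStatus int NOT NULL
    -- ...remaining value columns as in the original table...
);
GO
-- The CHECK constraints on HistoryParameterID are what let the optimizer
-- skip member tables that cannot contain the requested range.
CREATE VIEW dbo.HistorySampleValuesAll
AS
    SELECT HistoryParameterID, SourceTimeStamp, ValueStatus, ArchiveStatus
    FROM dbo.HistorySampleValues_P1
    UNION ALL
    SELECT HistoryParameterID, SourceTimeStamp, ValueStatus, ArchiveStatus
    FROM dbo.HistorySampleValues_P2;
GO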
I would not expect inserts to slow down with table size if HistoryParameterID is incremental. However, in the case of a random value, inserts will become progressively slower as the table size grows, due to lower buffer cache efficiency. That problem will exist with a single table, a partitioned table, or a partitioned view. See http://www.dbdelta.com/improving-uniqueidentifier-performance/ for an example using a GUID, but the issue applies to any random key value.
You might try a single table with SourceTimeStamp alone as the clustered index key and a non-clustered index on HistoryParameterID and SourceTimeStamp. That would provide the best insert performance, and the non-clustered index (maybe with included columns) might be good enough for your select queries.
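A rough sketch of that layout (the index names are made up, the existing clustered index would have to be dropped or rebuilt first, and the INCLUDE list should be tuned to the actual select queries):

-- Cluster on the timestamp so new rows append at the end of the table.
CREATE CLUSTERED INDEX CIX_HSV_SourceTimeStamp
    ON dbo.HistorySampleValues (SourceTimeStamp);

-- Support the (HistoryParameterID, SourceTimeStamp) lookups.
CREATE NONCLUSTERED INDEX IX_HSV_HistoryParameterID_SourceTimeStamp
    ON dbo.HistorySampleValues (HistoryParameterID, SourceTimeStamp)
    INCLUDE (ValueStatus, ArchiveStatus);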
Everything you need is here; I hope you can figure it out.
http://msdn.microsoft.com/en-us/library/ms188730.aspx
For Standard Edition, alternative solutions exist, like this answer.
This is an interesting article too.
We also implemented this in our enterprise automation application, with custom indexing around the users table, and it worked well.
Here are the pros and cons of a custom implementation:
Pros:
Higher performance than a partitioned table, because the application is aware of its own data access logic.
Cons:
Implementing the routing method and maintaining the indexes yourself.
Decentralized data.
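As a rough illustration of the application-aware routing idea (everything here is hypothetical: ten member tables named HistorySampleValues_00 through HistorySampleValues_09 with identical schemas, routed by a simple modulo on HistoryParameterID):

CREATE PROCEDURE dbo.InsertHistorySample
    @HistoryParameterID int,
    @SourceTimeStamp    datetime2(7),
    @ValueStatus        int,
    @ArchiveStatus      int
AS
BEGIN
    SET NOCOUNT ON;

    -- Pick the member table for this parameter (00..09).
    DECLARE @bucket char(2) =
        RIGHT('0' + CAST(@HistoryParameterID % 10 AS varchar(2)), 2);

    DECLARE @sql nvarchar(max) =
        N'INSERT INTO dbo.HistorySampleValues_' + @bucket +
        N' (HistoryParameterID, SourceTimeStamp, ValueStatus, ArchiveStatus)
           VALUES (@p1, @p2, @p3, @p4);';

    EXEC sp_executesql @sql,
        N'@p1 int, @p2 datetime2(7), @p3 int, @p4 int',
        @p1 = @HistoryParameterID,
        @p2 = @SourceTimeStamp,
        @p3 = @ValueStatus,
        @p4 = @ArchiveStatus;
END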
I have a table (not designed by me) that has a smallint auto incrementing primary key, i.e. IDENTITY(1,1):
CREATE TABLE [table1](
[id] [smallint] IDENTITY(1,1) NOT NULL,
...
CONSTRAINT [id_pk] PRIMARY KEY CLUSTERED ([id] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
I'm trying to figure out what happens when the primary key value goes out of range, but there are gaps in the table that could be filled. For example, suppose the table has 32767 rows (the max smallint value), with ids spanning from 1 to 32767. Now suppose I delete half of the rows and try to insert a new row: will an out-of-range error occur, or will ids somehow be recalculated so the new record can be inserted (given that 32767 + 1 cannot be used as a new id value)?
From what I've found, gap values are (for good reasons) not supposed to be reused. But does that mean (in an extreme case) that if there is only one row left in the table, with id = 32767 (the other rows having been deleted), trying to insert another row will cause problems due to an out-of-range PK value? Or will SQL Server handle such situations and assign some other id value to the new row, meaning that 32767 can be seen merely as the maximum number of rows in the table? How can such situations be handled when they occur, if changing the table design (i.e. from smallint to bigint or a GUID PK) is not possible?
From the documentation:
Reuse of values – For a given identity property with specific
seed/increment, the identity values are not reused by the engine. If a
particular insert statement fails or if the insert statement is rolled
back then the consumed identity values are lost and will not be
generated again. This can result in gaps when the subsequent identity
values are generated.
So the gaps will remain, and the insert will fail.
For proof, run the following code:
DROP TABLE table1
CREATE TABLE [table1](
[id] [smallint] IDENTITY(1,1) NOT NULL,
[somecolumn] varchar(10),
CONSTRAINT [id_pk] PRIMARY KEY CLUSTERED ([id] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
SET IDENTITY_INSERT table1 ON
INSERT INTO table1 (id, [somecolumn]) VALUES (32766, 'a')
SET IDENTITY_INSERT table1 OFF
INSERT INTO table1 ([somecolumn]) VALUES ('b')
-- Following line will print error:
--Msg 8115, Level 16, State 1, Line 16
--Arithmetic overflow error converting IDENTITY to data type smallint.
INSERT INTO table1 ([somecolumn]) VALUES ('c')
IDENTITY values produce an error if they go out of range. The only remedy in this case is to either reseed the value (DBCC CHECKIDENT('MyTable', RESEED, 0)) or expand the datatype to a bigger one.
Why does SQL Server not reuse gaps, and why do gaps exist in the first place? The reasons are performance-related. Internally, an IDENTITY is a variable in memory that can be updated atomically. That's important because multiple threads may insert into a given table with an IDENTITY column.
Imagine thread one inserts into table 1 and is given id = 10. The thread's transaction continues with other work. Now thread two also inserts into the same table concurrently (which is possible unless you use SERIALIZABLE) and is given id = 11. Thread one then runs into an error and its transaction is rolled back, freeing id 10. You cannot change thread two's id of 11 retroactively, so you have a gap.
If you wanted to fill that gap, you would have to know it's there, but SQL Server simply does not store that information. Even if it did, you'd run into all kinds of locking issues, which are neatly avoided with the single atomic value used as the counter.
In SQL Server 2012 you can even use such atomic counters in your own code directly, through the SEQUENCE keyword.
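A small illustration (object names are examples only): a sequence is an independent counter you can either consume explicitly or bind as a column default.

CREATE SEQUENCE dbo.OrderNumbers
    AS bigint
    START WITH 1
    INCREMENT BY 1;
GO

-- Consume the next value explicitly...
DECLARE @next bigint = NEXT VALUE FOR dbo.OrderNumbers;

-- ...or bind it as a default, instead of an IDENTITY column.
CREATE TABLE dbo.Orders
(
    OrderNo bigint NOT NULL
        CONSTRAINT DF_Orders_OrderNo DEFAULT (NEXT VALUE FOR dbo.OrderNumbers),
    Created datetime2(7) NOT NULL
        CONSTRAINT DF_Orders_Created DEFAULT (SYSUTCDATETIME())
);

Note that sequences cache values by default as well, so they can also show gaps after an unexpected shutdown unless created with NO CACHE.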
This depends on the storage engine you use.
MyISAM would handle this request differently than InnoDB, for example.
Mostly you will get a SQL error when trying to write to your DB once there is no primary key value left.
Referring to http://dev.mysql.com/doc/refman/5.1/en/integer-types.html:
smallint goes up to 65535 when unsigned.
I generate change scripts for my database to keep it under source control, and a strange thing happens every time:
I have a FlowFolder table, and clicking Generate Scripts creates the following two scripts:
dbo.FlowFolder.Table.sql:
USE [NC]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[FlowFolder](
[Id] [int] IDENTITY(1,1) NOT NULL,
[ParentId] [dbo].[ParentId] NULL,
[ParentType] [dbo].[CLRTypeName] NOT NULL,
[Name] [dbo].[EntityName] NOT NULL,
[Description] [dbo].[EntityDescription] NULL,
[LastChanged] [int] NULL,
CONSTRAINT [PK_FlowFolder] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
DF_FlowFolder_LastChanged.Default.sql:
USE [NC]
GO
ALTER TABLE [dbo].[FlowFolder] ADD CONSTRAINT [DF_FlowFolder_LastChanged]
DEFAULT ((0)) FOR [LastChanged]
GO
Question
Why does SQL Server Express produce two files?
Why doesn't it place this constraint as a DEFAULT(0) attribute on the LastChanged field in the CREATE TABLE statement?
How can I force SQL Server to generate a consolidated script for each change instead of splitting them up?
EDIT:
How we generate scripts: at first, it was a single file. But, unfortunately, SQL Server Express does not keep the order of the database entities from save to save, meaning that even a small change in the schema can result in a script widely different from its predecessor. This is very inconvenient if one wishes to compare differences between schemas. Hence we adopted another approach: we generate a script per database entity (not data, but schema entities, like tables, user types, etc.) and then apply a small utility that removes the comment SQL Server Express inserts into each file stating the date of generation. After that, it is clearly visible which schema entities have changed from revision to revision.
In conclusion, we must generate a script per schema entity.
About the DEFAULT(0) constraints: we really do not need them to be named constraints, so placing them on the column definition is fine.
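For illustration, this is roughly the consolidated form we are after, with the default folded into the column definition (column list abbreviated, default left unnamed):

CREATE TABLE [dbo].[FlowFolder](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Name] [dbo].[EntityName] NOT NULL,
    [LastChanged] [int] NULL DEFAULT ((0)),
    CONSTRAINT [PK_FlowFolder] PRIMARY KEY CLUSTERED ([Id] ASC)
) ON [PRIMARY]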
Do you have "File per object" selected on the output option panel of the wizard?
Because you can't give constraints names when they're embedded in CREATE TABLE
Make sure "Single file" is selected on the output option panel -- or try "Script to New Query Window"
Unfortunately, the feature you're looking for doesn't exist.
All constraints are given names -- even unnamed constraints are given names. SQL Server doesn't keep track of which names it created and which ones you created, so in the script generation process, it has to split them off.
Usually, it's better to manage this process in reverse. In other words, have a collection of script files that you combine together to create the DB. That way, you can have the script files structured however you want. Team System Data Edition does this all automatically for you. A regular DB project lets you keep them separate, but it's more work on the deployment side.
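A sketch of that reverse approach, assuming the per-object files are kept as-is: a small master script run in SQLCMD mode (sqlcmd -i build.sql, or SSMS's "SQLCMD Mode") that stitches them together in a fixed order; the paths below are placeholders:

:r .\Tables\dbo.FlowFolder.Table.sql
:r .\Defaults\DF_FlowFolder_LastChanged.Default.sql
:r .\ForeignKeys\AllForeignKeys.sql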
I’m running into an odd problem, and I need some help trying to figure it out.
I have a database which has an ID column (defined as int not null, Identity, starts at 1, increments by 1) in addition to all the application data columns. The primary key for the table is the ID column, no other components.
There is no set of data I can use as a "natural primary key" since the application has to allow for multiple submissions of the same data.
I have a stored procedure which is the only way to add new records to the table (other than logging into the server directly as the db owner).
While QA was testing the application this morning, they tried to enter a new record into the database (using the application as it was intended, and as they have been doing for the last two weeks) and encountered a primary key violation on this table.
This is the same way I've been doing primary keys for about 10 years now, and I have never run across this before.
Any ideas on how to fix this? Or is this one of those cosmic ray glitches that shows up once in a long while.
Thanks for any advice you can give.
Nigel
Edited at 1:15PM EDT June 12th, to give more information
A simplified version of the schema...
CREATE TABLE [dbo].[tbl_Queries](
[QueryID] [int] IDENTITY(1,1) NOT NULL,
[FirstName] [varchar](50) NOT NULL,
[LastName] [varchar](50) NOT NULL,
[Address] [varchar](150) NOT NULL,
[Apt#] [varchar](10) NOT NULL
... <12 other columns deleted for brevity>
[VersionCode] [timestamp] NOT NULL,
CONSTRAINT [PK_tbl_Queries] PRIMARY KEY CLUSTERED
(
[QueryID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
(also removed the default value statements)
The stored procedure is as follows:
insert into dbo.tbl_Queries
( FirstName,
LastName,
[Address],
[Apt#]...) values
( @firstName,
@lastName,
@address,
isnull(@apt, ''), ... )
It doesn't even look at the identity column; it doesn't use @@IDENTITY, SCOPE_IDENTITY() or anything similar. It's just fire and forget.
I am as confident as I can be that the identity value wasn't reset, and that no-one else is using direct database access to enter values. The only time in this project that identity insert is used is in the initial database deployment to setup specific values in lookup tables.
The QA team tried again right after getting the error, and was able to submit a query successfully, and they have been trying since then to reproduce it, and haven't succeeded so far.
I really do appreciate the ideas folks.
Sounds like the identity seed got corrupted or reset somehow. Easiest solution will be to reset the seed to the current max value of the identity column:
DECLARE @nextid INT;
SET @nextid = (SELECT MAX([columnname]) FROM [tablename]);
DBCC CHECKIDENT ([tablename], RESEED, @nextid);
While I don't have an explanation as to the potential cause, it is certainly possible to change the seed value of an identity column. If the seed were lowered to where the next value would already exist in the table, then that could certainly cause what you're seeing. Try running DBCC CHECKIDENT (table_name) and see what it gives you.
For more information, check out this page
Random thought based on experience:
Have you synced data with, say, Red Gate Data Compare? It has an option to reseed identity columns. It has caused issues for us, and for another project last month.
You may also have explicitly loaded/synced IDs.
Maybe someone logged into the server directly and inserted some records with explicit new IDs; then, when the identity auto-increment reached those numbers, a primary key violation happened.
But the cosmic ray is also a good explanation ;)
Just to make very, very sure... you aren't using IDENTITY_INSERT in your stored procedure, are you? Some logic like this:
declare @id int;
set @id = (select max(IDColumn) from dbo.SomeTable);
SET IDENTITY_INSERT dbo.SomeTable ON
insert into dbo.SomeTable (IDColumn, ...others...) values (@id + 1, ...others...);
SET IDENTITY_INSERT dbo.SomeTable OFF
.
.
.
I feel sticky just typing it. But every once in a while you run across folks who never quite understood what an identity column is all about, and I want to make sure that this is ruled out. By the way: if this is the answer, I won't hold it against you if you just delete the question and never admit that this was your problem!
Can you tell that I hire interns every summer?
Are you using functions like @@IDENTITY or SCOPE_IDENTITY() in any of your procedures? If your table has triggers or multiple inserts, you could be getting back the wrong identity value for the table you want.
Hopefully that is not the case, but there is a known bug in SQL 2005 with SCOPE_IDENTITY():
http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=328811
The Primary Key violation is not necessarily coming from that table.
Does the application touch any other tables or call any other stored procedures for that function? Are there any triggers on the table? Or does the stored procedure itself use any other tables or stored procedures?
In particular, an Auditing table or trigger could cause this.
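If it helps, a couple of checks along those lines (the procedure name is a placeholder; sp_depends is old but still available):

-- Any triggers defined on the table?
SELECT t.name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID(N'dbo.tbl_Queries');

-- What other tables/procedures does the insert procedure reference?
EXEC sp_depends N'dbo.usp_InsertQuery';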