MS SQL removes newly inserted row automatically straight after insert - sql-server

I have a weird issue. My MSSQL database has multiple tables, but only two of them do this:
I insert a new row
get row count (which is increased by 1)
get row count again within seconds (this time it's decreased) and the new row is not in the table anymore
The query I use to insert a row and get the count:
INSERT INTO [dbo].[CSMobileMessages]
([MessageSID],[IssueID],[UserSent])
VALUES
('213',0,'blabla')
SELECT count([IDx])
FROM [dbo].[CSMobileMessages]
The SQL query returns "1 row affected" and I even get the new row's ID back from the identity column. No errors at all. I checked in Profiler, which shows 1 row inserted successfully and nothing else happening.
The table has no triggers. The only index is on the identity field (IDx), and the user is "sa" with full access. I tried a different user, but the same thing happens.
The table is called "CSMobileMessages", so I created a new table:
CREATE TABLE [dbo].[CSMobileMessages2](
[IDx] [int] IDENTITY(1,1) NOT NULL,
[MessageSID] [varchar](50) NULL,
[IssueID] [int] NOT NULL,
[UserSent] [varchar](50) NULL,
CONSTRAINT [PK_CSMobileMessages2] PRIMARY KEY CLUSTERED
(
[IDx] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[CSMobileMessages2] ADD CONSTRAINT [DF_CSMobileMessages2_IssueID] DEFAULT ((0)) FOR [IssueID]
GO
I inserted 1000 rows into the new table and it worked. So I deleted the old table (CSMobileMessages) and renamed the new table from CSMobileMessages2 to CSMobileMessages.
As soon as I do that, the inserted rows get deleted, and I get the exact same row count for the new table that I had with the old one. Also, I can't insert rows anymore. No services or any other software touches this table. However, if I restart the server I can insert 1 new row, and after that it starts happening again.
Edit:
I use SSMS and connect to the database remotely, but I tried locally on the server as well and the same thing happens. A service used this table, but I disabled it when this started a few days ago. Before that, the service ran happily for a year with no issues. I double-checked to make sure: the service is turned off, and no one connects to that table but me.
Has anyone ever seen this issue before and knows what causes it?
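A few hedged diagnostic queries that can help rule out the usual suspects here: hidden (or disabled) triggers, an open transaction that later rolls the insert back, and other sessions touching the table. The table name matches the question; the rest is generic:

```sql
-- Any triggers on the table, including disabled ones?
SELECT t.name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.CSMobileMessages');

-- Any open transaction in this database that might roll back later?
DBCC OPENTRAN;

-- When was the table last written to or read, and via which access path?
SELECT last_user_update, last_user_seek, last_user_scan
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID()
  AND object_id = OBJECT_ID('dbo.CSMobileMessages');
```

If DBCC OPENTRAN shows a long-running transaction around the time the rows vanish, that points at an uncommitted session (an application holding a transaction open) rather than SQL Server deleting anything on its own.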

I gave up, and as a last try the whole database was restored from a backup a few days old; it's now working as it's supposed to. I'm not marking this question as answered because, even though that fixed the problem, I still have no idea what exactly happened. In my 20+ years of coding I have never seen anything like this before.
Thanks to everyone who tried to help with ideas!

Related

Fix SQL Server identity and restore the proper numeration order

My SQL Server 2014 instance restarted unexpectedly, and that broke the contiguous auto-increment identity sequences on my entities. All new entities inserted into the tables now have their identity values jumped ahead by about 10,000.
Let's say there were entities with IDs "1, 2, 3"; now all newly inserted entities get IDs like "10004, 10005".
Here is real data:
..., 12379, 12380, 12381, (after the restart) 22350, 22351, 22352, 22353, 22354, 22355
(An extra question here: why did it insert the very first entity after the restart with 22350? I thought it should have been 22382, since the latest ID at that moment was 12381, and 12381 + 10001 = 22382.)
I searched and found out the reasons for what happened. Now I want to prevent such situations in the future and fix the current jump. It's a production server and users continuously add new stuff to the DB.
QUESTION 1
What options do I have here?
My thoughts on how to prevent it are:
Use sequences instead of identity columns
Use the T272 trace flag to disable the identity cache, and reseed the identity so it starts again from the latest correct value (I guess there is such an option)
What are the drawbacks of the two options above? Please suggest other approaches if there are any.
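For reference, the second option might be sketched like this (table and column names are placeholders, and this assumes SQL Server 2012/2014, where trace flag 272 disables the identity cache; on 2017+ the equivalent is ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF):

```sql
-- Prevent future jumps: start the instance with -T272 as a startup
-- parameter so identity values are no longer pre-allocated in a cache.

-- Repair the current jump by reseeding to the real maximum value:
DECLARE @max BIGINT = (SELECT MAX(TaskNo) FROM dbo.Tasks);
DBCC CHECKIDENT ('dbo.Tasks', RESEED, @max);
```

The trade-off is that, without the cache, every identity allocation has to be logged individually, which slows down heavy insert workloads.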
QUESTION 2
I'm not an expert in SQL Server, and now I need to normalize and adjust the numeration of entities, since it's a business requirement. I think I need to write a script that updates the wrong ID values, setting them to the right ones. Is it dangerous to update identity values? Some tables have dependent records. What might this script look like?
OTHER INFO
Here is how my identity columns declared (got this using "Generate scripts" option in SSMS):
CREATE TABLE [dbo].[Tasks]
(
[Id] [uniqueidentifier] NOT NULL,
[Created] [datetime] NOT NULL,
...
[TaskNo] [bigint] IDENTITY(1,1) NOT NULL
CONSTRAINT [PK_dbo.Tasks]
PRIMARY KEY CLUSTERED ([Id] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
I also use Entity Framework 6 for database manipulating.
I will be happy to provide any other information by request if needed.
I did this once in a weekend of downtime and ended up having to reseed the whole table, by turning identity insert off and then updating each row with a row number. This was based on the table's correct sort order, to make sure the sequence was right.
As it updated the whole table (500 million rows), it generated a hell of a lot of transaction log data. Make sure you have enough space for this, and pre-size the log if required.
As said above, though, if you must rely on the identity column then amend it to a sequence. Also, make sure your rollback mechanism is sound if there is an error during an insert and the sequence has already been incremented.
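A hedged sketch of what "amend it to a sequence" might look like. An IDENTITY column cannot be converted in place, so the usual approach is a non-identity column with a default bound to a sequence; the object names below are made up for illustration:

```sql
-- Continue numbering from one past the last good value
CREATE SEQUENCE dbo.TaskNoSeq AS BIGINT
    START WITH 12382   -- adjust to MAX(TaskNo) + 1 in your table
    NO CACHE;          -- NO CACHE avoids restart jumps, at some insert cost

-- Bind the sequence as the default of a (non-identity) column
ALTER TABLE dbo.TasksNew
    ADD CONSTRAINT DF_TasksNew_TaskNo
    DEFAULT (NEXT VALUE FOR dbo.TaskNoSeq) FOR TaskNo;
```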

SQL Server: NOT EXISTS clause isn't stopping duplicates when the SQL executed parallelly

I have three database tables like this:
book(book_id INT IDENTITY(1,1) PK, book_name VARCHAR(255), book_code INT UNIQUE)
series(series_id INT IDENTITY(1,1) PK, series_name VARCHAR(255), series_code INT UNIQUE)
bookseries(bookseries_id INT IDENTITY(1,1) PK, book_id INT FK, series_id INT FK) -- The combination (book_id + series_id) should be unique.
I have a functionality where the user can upload a spreadsheet with book_id and series_id populated (with around 50K records in the spreadsheet).
When the spreadsheet is uploaded, I need to insert a record into the bookseries table if the combination of book_id and series_id does not already exist in the bookseries table.
So, I am doing something like this (Pseudocode):
Dim sqlList As New List(Of String)
Dim sql As String = String.Empty
For each row in spreadsheetRows
sql = String.Format("INSERT INTO bookseries(book_id, series_id) SELECT {0},{1} WHERE NOT EXISTS (SELECT 1 FROM bookseries WHERE book_id={0} AND series_id={1})", row.book_id, row.series_id)
sqlList.Add(sql)
If sqlList.Count MOD 500 = 0 Then insertListIntoDB(sqlList)
Next
If sqlList.Count > 0 Then insertListIntoDB(sqlList)
This is working correctly (inserting a record if it doesn't already exist) when one user uploads a spreadsheet.
However, duplicate records are being inserted into the bookseries table (duplicate book_id + series_id) when two users upload spreadsheets at the same time and the same records appear in both spreadsheets.
I couldn't understand why/how the duplicates are being inserted, as I expected the WHERE NOT EXISTS clause to stop the duplicate insertions.
Example: INSERT INTO bookseries(book_id, series_id) SELECT 100, 1000 WHERE NOT EXISTS (SELECT 1 FROM bookseries WHERE book_id=100 AND series_id=1000)
Could anyone advise why this isn't working as I'd expect or suggest if there is a workaround?
Thank you in advance.
PS: I am aware of the parameterized SQL usage, SQL Injection, Dictionary, and the drawbacks of executing the raw SQL directly on the server etc, so please do not question why I'm not using them in this instance. The above example is just to keep things simple and explain what I'm trying to achieve. My question is purely related to why the NOT EXISTS clause isn't stopping the duplicate insertions in my code.
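For what it's worth, the reason NOT EXISTS alone can't stop this: the check and the insert are two separate steps, so under the default READ COMMITTED isolation two sessions can both run the subquery, both see no row, and then both insert. One commonly used workaround (untested against your exact schema) is to hold a key-range lock across the check and the insert with UPDLOCK and HOLDLOCK hints:

```sql
-- The hints make the existence check take an update-mode range lock,
-- so concurrent sessions serialize on the key and the second one
-- sees the first session's row before inserting.
INSERT INTO bookseries (book_id, series_id)
SELECT 100, 1000
WHERE NOT EXISTS (
    SELECT 1
    FROM bookseries WITH (UPDLOCK, HOLDLOCK)
    WHERE book_id = 100 AND series_id = 1000
);
```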
The simplest solution is to put a unique constraint on (book_id, series_id), since they form the natural composite key of the link table. Then you just need to handle the unique constraint error (number 2601 or 2627) when you do an insert and continue processing.
It's not obvious to me why your current code isn't working as expected. Are two users trying to upload duplicate records at the same time? If so, my guess is that the transaction scope is wrong and you should commit after each insert instead of after all the records are processed.
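A hedged sketch of the constraint-plus-error-handling approach described above (the constraint name is made up, and THROW requires SQL Server 2012+):

```sql
ALTER TABLE dbo.bookseries
    ADD CONSTRAINT UQ_bookseries_book_series UNIQUE (book_id, series_id);

-- On insert, swallow only the duplicate-key errors and keep going
BEGIN TRY
    INSERT INTO bookseries (book_id, series_id) VALUES (100, 1000);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() NOT IN (2601, 2627) THROW;
END CATCH;
```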
Perhaps your WHERE clause SELECT SQL is returning Null?
How about:
... WHERE ((SELECT Count(*) FROM bookseries WHERE book_id=100 AND series_id=1000) = 0)
Depending on your requirements, and piggybacking off of Jamie, you may consider adding a unique index on the two columns mentioned with the addition of ignoring duplicates as a potential work around. I don't have enough information about your application to know if this is a good suggestion, but it is an alternative.
In this example, the significant piece is IGNORE_DUP_KEY = ON. This lets you try to insert duplicate rows, but SQL Server will silently ignore them. This could have the added benefit of removing your WHERE NOT EXISTS check before inserting.
CREATE UNIQUE CLUSTERED INDEX [UCX_bookseries] ON dbo.bookseries
(
book_id ASC,
series_id ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = ON, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
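With that index in place, a batch insert that includes already-existing pairs reports "Duplicate key was ignored." instead of failing, so the NOT EXISTS guard becomes optional. A sketch, assuming the uploaded rows have been loaded into a staging table (the #spreadsheet_rows name is made up):

```sql
INSERT INTO dbo.bookseries (book_id, series_id)
SELECT s.book_id, s.series_id
FROM #spreadsheet_rows AS s;
-- Rows whose (book_id, series_id) already exist are silently skipped.
```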

Codeigniter won't connect to one table only

Running PHP 5.3 and SQL Server 2008 R2 using sqlsrv driver to connect.
Codeigniter 2.2
So I have been using CodeIgniter for a couple of years now, and this is the first time I have run across this problem. I have a table in my database named 'update_times', which is a log table for the data updates I load daily; it has around 2000 records and is indexed on the columns I query against. My database has 60 or so tables, and 'update_times' is the only table I am unable to select anything from with CodeIgniter.
I have done a bunch of tests:
I ran a record count for every table in the database; every other table was correct except the 'update_times' table, which returned 0 records
I can query (select) from the table in Management Studio with no problem.
I can also select from the update_times table using the sqlsrv PHP function sqlsrv_query; I do get records back with this method
I am unable to select using the Active Record select or query methods from CodeIgniter (I tried it in multiple controllers/models)
Here's the weird part: I can insert, update, and delete using the Active Record functions. It is only the select where I am having the issue, and only on this table.
I tried rebuilding the indexes and rebuilding the entire table as well, but nothing helped, so I am left stumped. I was going to just create a new table with a new name to replace update_times, but I really want to find the problem in CI so that I know what to do if it happens again. It's almost like CI is blocking the select for some reason.
I have now created a table with a different name but the same structure, and I am unable to query it, just like the update_times table. I am still stumped.
Here is the table structure of update_times:
CREATE TABLE [dbo].[update_times](
[ut_id] [int] IDENTITY(1,1) NOT NULL,
[table_name] [varchar](50) NOT NULL,
[start_time] [datetime] NOT NULL,
[end_time] [datetime] NOT NULL,
[records] [int] NOT NULL,
[emp_id] [int] NULL,
[dates_requested] [varchar](50) NULL,
PRIMARY KEY CLUSTERED
(
[ut_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Any help would be great or suggestions on how to narrow the error.

DTS Package Terminates because of Duplicate Key Row

We have an old DTS Package that our SQL 2000 Server uses to push Employee records out to machines on our manufacturing floor.
Recently, we upgraded one of the machines, and it now is running SQL 2008 Express.
We have reconfigured the DTS Package to push the Employee records out to this new Server, but now we are getting this error message:
FETCH_EMPLOYEES:
The statement has been terminated. Cannot insert duplicate key row in object 'dbo.Users' with unique index 'IX_tblUsers_OpID'.
If I remote into our SQL 2000 Server, I can Right-Click to execute each step of the DTS Package in succession with NO errors.
So, I log onto this machine's SQL 2008 Express instance to see if I can figure anything out.
Now I am looking at the FETCH_EMPLOYEES stored procedure:
PROCEDURE [dbo].[FETCH_EMPLOYEES] AS
DECLARE @OpID varchar(255)
DECLARE @Password varchar(50)
DECLARE Employee_Cursor CURSOR FOR
SELECT OpID, Password
FROM dbo.vw_Employees
OPEN Employee_Cursor
FETCH NEXT FROM Employee_Cursor
INTO @OpID, @Password
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO dbo.Users (OpID, Password, GroupID)
VALUES (@OpID, @Password, 'GROUP01')
FETCH NEXT FROM Employee_Cursor
INTO @OpID, @Password
END
CLOSE Employee_Cursor
DEALLOCATE Employee_Cursor
I don't really understand Cursors, but I can tell that the data is being pulled from a view called vw_Employees and being inserted into the table dbo.Users.
The view vw_Employees is simple:
SELECT DISTINCT FirstName + ' ' + LastName AS OpID, Num AS Password
FROM dbo.EmployeeInfo
WHERE (Num IS NOT NULL) AND (FirstName IS NOT NULL)
AND (LastName IS NOT NULL) AND (Train IS NULL OR Train <> 'EX')
So, now it seems the problem must be from the table dbo.Users.
I did not see anything particularly attention-getting with this, so I scripted the table (Script Table As > CREATE To in a query editor window) and got this definition that I don't really understand:
CREATE TABLE [dbo].[Users](
[ID] [int] IDENTITY(1,1) NOT NULL,
[OpID] [nvarchar](255) NOT NULL,
[Password] [nvarchar](50) NOT NULL,
[GroupID] [nvarchar](10) NOT NULL,
[IsLocked] [bit] NOT NULL,
CONSTRAINT [PK_tblUsers] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Users] WITH CHECK ADD CONSTRAINT [FK_tblUsers_tblGroups] FOREIGN KEY([GroupID])
REFERENCES [dbo].[Groups] ([GroupID])
GO
ALTER TABLE [dbo].[Users] CHECK CONSTRAINT [FK_tblUsers_tblGroups]
GO
ALTER TABLE [dbo].[Users] ADD CONSTRAINT [DF_tblUsers_IsLocked] DEFAULT ((0)) FOR [IsLocked]
GO
OK, I feel the problem is somewhere in this table definition, but I don't really understand what it is doing (after creating the basic table).
It has a CONSTRAINT section with lots of options I do not understand, and then it alters the table to add a FOREIGN KEY and other constraints.
My question: could someone help me understand what the error is telling me (other than that there is some duplicate key violation)?
What column could be throwing a duplicate key violation?
Did I include enough data and screenshots?
UPDATE:
Based on Comments, it sounds like this screenshot is needed.
In the Users table, there is a list of Indexes, and one called IX_tblUsers_OpID says it is Unique and Non-Clustered.
I think we have eliminated duplicate Op_ID values on our source data table EmployeeInfo by finding all of them with this script:
select num as 'Op_ID', count(num) as 'Occurrences'
from employeeInfo
group by num
having 1<count(num);
This should have gotten rid of all of my duplicates, right?
We purchase manufacturing machines that come configured with PCs for storing local data. The vendor supplies these scripts I have posted, so I cannot comment on why they picked what they did. We just run a job that pulls the data onto our server.
Having columns with unique values has always been of high value in any dataset. This constraint can be added to any column or index.
The error you receive is very clear and very specific. It literally gives the answer.
The statement has been terminated. Cannot insert duplicate key row in object 'dbo.Users' with unique index 'IX_tblUsers_OpID'.
It says "no duplicates... unique index..." and then it tells you the name of the constraint: "IX_tblUsers_OpID".
Now, keeping that in mind, you are trying to insert into that column values you craft on the fly by concatenating two strings: first name plus last name.
What are the chances of coming up with two of them being "John Smith"? High, very high!
Possible solutions:
You may remove the constraint and allow duplicates.
Modify the query so the values it tries to insert are indeed unique.
Use WITH (IGNORE_DUP_KEY = ON). Reference: index_option (Transact-SQL)
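If keeping the unique index is preferred, a hedged variation on the second option is to make the cursor's insert skip OpIDs that already exist in the target table (variable names follow the posted procedure):

```sql
INSERT INTO dbo.Users (OpID, Password, GroupID)
SELECT @OpID, @Password, 'GROUP01'
WHERE NOT EXISTS (SELECT 1 FROM dbo.Users WHERE OpID = @OpID);
```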
Another guy here at work found this hidden feature, which solves the immediate problem but could cause other unknown issues.
In the Users table designer view, we can right-click on the OpID column, select Indexes/Keys..., locate the generated IX_tblUsers_OpID key, and change its Is Unique value:
That seemed to have made it so that the DTS Package will run, and that is what we have going on right now.
I went back to the original EmployeeInfo table on our SQL 2000 Server to check for duplicate OpID values using this script:
select FirstName + ' ' + LastName as 'OpID',
Count(FirstName + ' ' + LastName) as 'Occurrences'
from EmployeeInfo
group by FirstName + ' ' + LastName
having 1 < count(FirstName + ' ' + LastName)
...but there were no records returned.
I'm not sure why the DTS Package was failing or why we had to turn off the Unique feature.
If anyone, at some time down the road, comes up with a better fix for this, please post!
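One more hedged check: since the source has no duplicate names, the failing inserts may be colliding with rows already sitting in dbo.Users on the SQL 2008 Express machine from a previous run, because the cursor insert has no guard and re-running the package re-inserts every employee. Assuming the source rows can be reached from the target server (for example, copied into a table there):

```sql
-- On the target: which incoming OpIDs already exist in dbo.Users?
SELECT u.OpID
FROM dbo.Users AS u
JOIN dbo.vw_Employees AS e ON e.OpID = u.OpID;
```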

Sql Server 2005 Primary Key violation on an Identity column

I’m running into an odd problem, and I need some help trying to figure it out.
I have a table with an ID column (defined as int not null, Identity, starting at 1, incrementing by 1) in addition to all the application data columns. The primary key for the table is the ID column alone, with no other components.
There is no set of data I can use as a "natural primary key" since the application has to allow for multiple submissions of the same data.
I have a stored procedure, which is the only way to add new records into the table (other than logging into the server directly as the db owner)
While QA was testing the application this morning, they tried to enter a new record into the database (using the application as it was intended, and as they have been doing for the last two weeks) and encountered a primary key violation on this table.
This is the same way I've been doing Primary Keys for about 10 years now, and have never run across this.
Any ideas on how to fix this? Or is this one of those cosmic ray glitches that shows up once in a long while.
Thanks for any advice you can give.
Nigel
Edited at 1:15PM EDT June 12th, to give more information
A simplified version of the schema...
CREATE TABLE [dbo].[tbl_Queries](
[QueryID] [int] IDENTITY(1,1) NOT NULL,
[FirstName] [varchar](50) NOT NULL,
[LastName] [varchar](50) NOT NULL,
[Address] [varchar](150) NOT NULL,
[Apt#] [varchar](10) NOT NULL
... <12 other columns deleted for brevity>
[VersionCode] [timestamp] NOT NULL,
CONSTRAINT [PK_tbl_Queries] PRIMARY KEY CLUSTERED
(
[QueryID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
(also removed the default value statements)
The stored procedure is as follows
insert into dbo.tbl_Queries
( FirstName,
LastName,
[Address],
[Apt#]...) values
( @firstName,
@lastName,
@address,
isnull(@apt, ''), ... )
It doesn't even look at the identity column; it doesn't use IDENTITY_INSERT, @@IDENTITY, SCOPE_IDENTITY() or anything similar. It's just fire and forget.
I am as confident as I can be that the identity value wasn't reset, and that no-one else is using direct database access to enter values. The only time in this project that identity insert is used is in the initial database deployment to setup specific values in lookup tables.
The QA team tried again right after getting the error and was able to submit a query successfully, and they have been trying to reproduce it ever since without success.
I really do appreciate the ideas folks.
Sounds like the identity seed got corrupted or reset somehow. The easiest solution is to reseed to the current max value of the identity column:
DECLARE @nextid INT;
SET @nextid = (SELECT MAX([columnname]) FROM [tablename]);
DBCC CHECKIDENT ([tablename], RESEED, @nextid);
While I don't have an explanation as to a potential cause, it is certainly possible to change the seed value of an identity column. If the seed were lowered so that the next value would already exist in the table, that could certainly cause what you're seeing. Try running DBCC CHECKIDENT (table_name) and see what it gives you.
For more information, check out this page
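A quick hedged check for this condition, using the table from the question (if current_identity is at or below max_id, the next insert will collide with an existing key):

```sql
SELECT IDENT_CURRENT('dbo.tbl_Queries') AS current_identity,
       MAX(QueryID)                     AS max_id
FROM dbo.tbl_Queries;
```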
Random thought based on experience: have you synched data with, say, Red Gate Data Compare? It has an option to reseed identity columns. It's caused issues for us, and for another project last month.
You may also have explicitly loaded/synched IDs too.
Maybe someone inserted some records by logging into the server directly, specifying a new ID explicitly; then, when the identity auto-increment reached that number, a primary key violation happened.
But the cosmic ray is also a good explanation ;)
Just to make very, very sure... you aren't using IDENTITY_INSERT in your stored procedure, are you? Some logic like this:
DECLARE @id int;
SELECT @id = MAX(IDColumn) FROM SomeTable;
SET IDENTITY_INSERT dbo.SomeTable ON;
INSERT INTO dbo.SomeTable (IDColumn, ...others...) VALUES (@id + 1, ...others...);
SET IDENTITY_INSERT dbo.SomeTable OFF;
I feel sticky just typing it. But every once in a while you run across folks who never quite understood what an identity column is all about, and I want to make sure that this is ruled out. By the way: if this is the answer, I won't hold it against you if you just delete the question and never admit that this was your problem!
Can you tell that I hire interns every summer?
Are you using functions like @@IDENTITY or SCOPE_IDENTITY() in any of your procedures? If your table has triggers or multiple inserts, you could be getting back the wrong identity value for the table you want.
Hopefully that is not the case, but there is a known bug in SQL 2005 with SCOPE_IDENTITY():
http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=328811
The Primary Key violation is not necessarily coming from that table.
Does the application touch any other tables or call any other stored procedures for that function? Are there any triggers on the table? Or does the stored procedure itself use any other tables or stored procedures?
In particular, an Auditing table or trigger could cause this.