Problem with Indexed View in SQL Server, Error 8646 - sql-server

I was just prototyping a new system for deferring certain operations until out of hours on one of our databases. I've come up with (what I think) a pretty simple schema. I was first prototyping on SQL Server 2005 Express, but have confirmed the same problem on 2008 Developer. The error I'm getting is:
Msg 8646, Level 21, State 1, Procedure Cancel, Line 6
Unable to find index entry in index ID 1, of table 277576027, in database 'xxxxxx'. The indicated index is corrupt or there is a problem with the current update plan. Run DBCC CHECKDB or DBCC CHECKTABLE. If the problem persists, contact product support.
The schema I'm using is:
create schema Writeback authorization dbo
    create table Deferrals (
        ClientID uniqueidentifier not null,
        RequestedAt datetime not null,
        CompletedAt datetime null,
        CancelledAt datetime null,
        ResolvedAt as ISNULL(CompletedAt, CancelledAt) persisted,
        constraint PK_Writeback_Deferrals PRIMARY KEY (ClientID, RequestedAt) on [PRIMARY],
        constraint CK_Writeback_Deferrals_NoTimeTravel CHECK ((RequestedAt <= CompletedAt) AND (RequestedAt <= CancelledAt)),
        constraint CK_Writeback_Deferrals_NoSchrodinger CHECK ((CompletedAt is null) or (CancelledAt is null))
        /* TODO: FOREIGN KEY */
    )
    create view Pending with schemabinding as
    select
        ClientID
    from
        Writeback.Deferrals
    where
        ResolvedAt is null
go
alter table Writeback.Deferrals add constraint
DF_Writeback_Deferrals_RequestedAt DEFAULT CURRENT_TIMESTAMP for RequestedAt
go
create unique clustered index PK_Writeback_Pending on Writeback.Pending (ClientID)
go
create procedure Writeback.Defer
    @ClientID uniqueidentifier
as
set nocount on
insert into Writeback.Deferrals (ClientID)
select @ClientID
where not exists (select * from Writeback.Pending where ClientID = @ClientID)
go
create procedure Writeback.Cancel
    @ClientID uniqueidentifier
as
set nocount on
update
    Writeback.Deferrals
set
    CancelledAt = CURRENT_TIMESTAMP
where
    ClientID = @ClientID and
    CompletedAt is null and
    CancelledAt is null
go
create procedure Writeback.Complete
    @ClientID uniqueidentifier
as
set nocount on
update
    Writeback.Deferrals
set
    CompletedAt = CURRENT_TIMESTAMP
where
    ClientID = @ClientID and
    CompletedAt is null and
    CancelledAt is null
go
And the code that provokes the error is as follows:
declare @ClientA uniqueidentifier
declare @ClientB uniqueidentifier
select @ClientA = newid(), @ClientB = newid()
select * from Writeback.Pending
exec Writeback.Defer @ClientA
select * from Writeback.Pending
exec Writeback.Defer @ClientB
select * from Writeback.Pending
exec Writeback.Cancel @ClientB --<-- Error being raised here
select * from Writeback.Pending
exec Writeback.Complete @ClientA
select * from Writeback.Pending
select * from Writeback.Deferrals
I've seen a few others encountering such problems, but they seem to either have aggregates in their views (and a message back from MS saying they'd remove the ability to create such indexed views in 2005 SP 1), or they resolved it by applying a merge join in their join clause (but I don't have one).
Initially there was no computed column in the Deferrals table, and the where clause in the view was testing the CompletedAt and CancelledAt columns for NULL separately. But I changed to the above just to see if I could provoke different behaviour.
All of my SET options look right for using indexed views, and if they weren't, I'd expect a less violent error to be thrown.
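For reference, the relevant session options can be inspected with SESSIONPROPERTY; everything below should return 1, except NUMERIC_ROUNDABORT, which should return 0:
select
    SESSIONPROPERTY('ANSI_NULLS') as ANSI_NULLS,
    SESSIONPROPERTY('ANSI_PADDING') as ANSI_PADDING,
    SESSIONPROPERTY('ANSI_WARNINGS') as ANSI_WARNINGS,
    SESSIONPROPERTY('ARITHABORT') as ARITHABORT,
    SESSIONPROPERTY('CONCAT_NULL_YIELDS_NULL') as CONCAT_NULL_YIELDS_NULL,
    SESSIONPROPERTY('NUMERIC_ROUNDABORT') as NUMERIC_ROUNDABORT,
    SESSIONPROPERTY('QUOTED_IDENTIFIER') as QUOTED_IDENTIFIER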
Any ideas?

You have index corruption. Run DBCC CHECKDB and see what errors it reports. The easiest fix is to rebuild your indexes.
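A minimal sketch of that, using the database and table names from the question:
DBCC CHECKDB('xxxxxx') WITH NO_INFOMSGS;
-- or just the one table:
DBCC CHECKTABLE('Writeback.Deferrals');
-- rebuild every index on the table:
ALTER INDEX ALL ON Writeback.Deferrals REBUILD;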
Also take a look at this KB article if it applies to your situation.
Also note that putting a primary key on a GUID column creates a clustered index on it by default, which is one of the worst things you can do performance-wise.
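If the GUID has to stay in the key, the clustered index can be kept free for something better by declaring the primary key nonclustered; a sketch against the Deferrals table from the question:
constraint PK_Writeback_Deferrals PRIMARY KEY NONCLUSTERED (ClientID, RequestedAt)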

I managed to work out what's causing this error by rebuilding this script from scratch, adding pieces as I went.
It's some kind of bug that is provoked when the view is created as part of a CREATE SCHEMA statement. If I separate the CREATE SCHEMA into its own batch, and then create the table and view in separate batches, everything works fine.
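A minimal sketch of the working arrangement, with the schema, table, and view each in their own batch (constraints trimmed for brevity):
create schema Writeback authorization dbo
go
create table Writeback.Deferrals (
    ClientID uniqueidentifier not null,
    RequestedAt datetime not null,
    CompletedAt datetime null,
    CancelledAt datetime null,
    ResolvedAt as ISNULL(CompletedAt, CancelledAt) persisted,
    constraint PK_Writeback_Deferrals PRIMARY KEY (ClientID, RequestedAt)
)
go
create view Writeback.Pending with schemabinding as
select ClientID from Writeback.Deferrals where ResolvedAt is null
go
create unique clustered index PK_Writeback_Pending on Writeback.Pending (ClientID)
go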
Long overdue edit - I raised this on Connect here. It was confirmed as being an issue in SQL Server 2008.
Internal builds (in 2010) indicated it was no longer an issue, and I have (just now, 2016) confirmed that the script in the question does not generate the same error in SQL Server 2012. The fix was not back-ported to SQL Server 2008.

Related

Can I determine when an Azure SQL DB row was last updated? [duplicate]

I need to create a new DATETIME column in SQL Server that will always contain the date of when the record was created, and then it needs to automatically update whenever the record is modified. I've heard people say I need a trigger, which is fine, but I don't know how to write it. Could somebody help with the syntax for a trigger to accomplish this?
In MySQL terms, it should do exactly the same as this MySQL statement:
ADD `modstamp` timestamp NULL
DEFAULT CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP
Here are a few requirements:
I can't alter my UPDATE statements to set the field when the row is modified, because I don't control the application logic that writes to the records.
Ideally, I would not need to know the names of any other columns in the table (such as the primary key)
It should be short and efficient, because it will happen very often.
SQL Server doesn't have a way to define a default value for UPDATE.
So you need to add a column with a default value for inserts:
ADD modstamp DATETIME2 NULL DEFAULT GETDATE()
And add a trigger on that table:
CREATE TRIGGER tgr_modstamp
ON TABLENAME
AFTER UPDATE AS
UPDATE TABLENAME
SET ModStamp = GETDATE()
WHERE ID IN (SELECT DISTINCT ID FROM Inserted)
And yes, you need to specify an identity (or other key) column for each such trigger.
CAUTION: take care when adding columns to tables whose application code you don't control. If your app has INSERT statements without a column list, they will raise errors even though the new column has a default value.
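For illustration (hypothetical table and column names), the first INSERT below breaks as soon as a third column is added, while the second keeps working:
-- MyTable originally had two columns (Id, Name); modstamp was added later
INSERT INTO MyTable VALUES (1, 'abc');            -- fails: column count no longer matches
INSERT INTO MyTable (Id, Name) VALUES (1, 'abc'); -- still works; modstamp gets its default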
This is possible since SQL Server 2016 by using PERIOD FOR SYSTEM_TIME.
This is something that was introduced for temporal tables but you don't have to use temporal tables to use this.
An example is below
CREATE TABLE dbo.YourTable
(
FooId INT PRIMARY KEY CLUSTERED,
FooName VARCHAR(50) NOT NULL,
modstamp DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
MaxDateTime2 DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME (modstamp,MaxDateTime2)
)
INSERT INTO dbo.YourTable (FooId, FooName)
VALUES (1,'abc');
SELECT *
FROM dbo.YourTable;
WAITFOR DELAY '00:00:05'
UPDATE dbo.YourTable
SET FooName = 'xyz'
WHERE FooId = 1;
SELECT *
FROM dbo.YourTable;
DROP TABLE dbo.YourTable;
It has some limitations.
The time stored is set by the system and is always UTC.
There is a need to declare a second column (MaxDateTime2 above) that is completely superfluous for this use case, but it can be marked as HIDDEN, making it easier to ignore.
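If local times are wanted when reading, one option on the same SQL Server 2016+ versions is AT TIME ZONE; the target time zone below is just an example:
SELECT FooId,
    modstamp AT TIME ZONE 'UTC' AT TIME ZONE 'Central European Standard Time' AS modstamp_local
FROM dbo.YourTable;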
Okay, I always like to keep track of not only when something happened but also who did it!
Let's create a test table in [tempdb] named [dwarfs]. At a prior job, a financial institution, we kept track of the inserted (create) date and the updated (modify) date.
-- just playing
use tempdb;
go
-- drop table
if object_id('dwarfs') > 0
drop table dwarfs
go
-- create table
create table dwarfs
(
asigned_id int identity(1,1),
full_name varchar(16),
ins_date datetime,
ins_name sysname,
upd_date datetime,
upd_name sysname
);
go
-- insert/update dates
alter table dwarfs
add constraint [df_ins_date] default (getdate()) for ins_date;
alter table dwarfs
add constraint [df_upd_date] default (getdate()) for upd_date;
-- insert/update names
alter table dwarfs
add constraint [df_ins_name] default (coalesce(suser_sname(),'?')) for ins_name;
alter table dwarfs
add constraint [df_upd_name] default (coalesce(suser_sname(),'?')) for upd_name;
go
For updates, both the inserted and deleted tables exist. I chose to join on inserted for the update.
-- create the update trigger
create trigger trg_changed_info on dbo.dwarfs
for update
as
begin
-- nothing to do?
if (@@rowcount = 0)
return;
update d
set
upd_date = getdate(),
upd_name = (coalesce(suser_sname(),'?'))
from
dwarfs d join inserted i
on
d.asigned_id = i.asigned_id;
end
go
Last but not least, let's test the code. Anyone can type in an untested TSQL statement; however, I always stress testing to my team!
-- remove data
truncate table dwarfs;
go
-- add data
insert into dwarfs (full_name) values
('bilbo baggins'),
('gandalf the grey');
go
-- show the data
select * from dwarfs;
-- update data
update dwarfs
set full_name = 'gandalf'
where asigned_id = 2;
-- show the data
select * from dwarfs;
The output. I only waited 10 seconds between the insert and the update. The nice thing is that who and when are both captured.
create trigger tr_somename
on table_name
for update
as
begin
    set nocount on;
    update t
    set t.field_name = getdate()
    from table_name t
    inner join inserted i on t.pk_column = i.pk_column
end
ALTER TRIGGER [trg_table_name_Modified]
ON [table_name]
AFTER UPDATE
AS
BEGIN
    UPDATE table_name
    SET modified_dt_tm = GETDATE() -- or use SYSDATETIME() for 2008 and newer
    FROM Inserted i
    WHERE i.ID = table_name.id
END

Running an alter table alter column statement more than once in SQL Server

Are there any negative implications to running an alter table alter column statement more than once in SQL Server?
Say I alter a column's datatype and nullability like this:
--create table
create table Table1
(
Column1 varchar(50) not null
)
go
--insert some records
insert into Table1 values('a')
insert into Table1 values('b')
go
--alter once
alter table Table1
alter column Column1 nvarchar(250) not null
go
--alter twice
alter table Table1
alter column Column1 nvarchar(250) not null
go
All of the SQL above works; I have tested it. I could also check the column's current properties before running the ALTER statements. The question is: is there any advantage to, say, checking that the column isn't already of the target type and nullability before altering?
After the first alter, does SQL Server figure out that the table has already been altered, so that the second alter essentially does nothing?
Are there any differences across different versions of SQL Server about how this is handled?
Thanks,
Ilias
This is a metadata-only operation.
It doesn't have to read or write any of the data pages belonging to Table1. It isn't quite a no-op, though.
It will still start a transaction, acquire a schema modification lock on the table, and update the modified column in the row for this table in sys.sysschobjs (exposed to us through the modify_date column in sys.objects).
Moreover, because the table has been modified, any execution plans referencing the table will need to be recompiled on their next use.
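A quick way to observe this, against the Table1 from the question (the dates returned will differ on every run):
SELECT modify_date FROM sys.objects WHERE object_id = OBJECT_ID('dbo.Table1');
GO
ALTER TABLE Table1 ALTER COLUMN Column1 nvarchar(250) NOT NULL
GO
SELECT modify_date FROM sys.objects WHERE object_id = OBJECT_ID('dbo.Table1');
-- modify_date moves forward even though no data pages were touched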

How to Alter Column from nvarchar(max) to nvarchar(50)

I have an existing table in SQL Server 2008 with one of its columns as NVARCHAR(MAX), and it only holds values of fewer than 10 characters.
This table is in production and has data in it.
I have got a requirement wherein I have to alter this column from NVARCHAR(MAX) to NVARCHAR(50). SQL Server gives a truncation error while doing this operation, even though the data in that column is fewer than 10 characters.
This is my script:
ALTER TABLE [dbo].[Table] ALTER COLUMN [Column1] NVARCHAR ( 50 ) NOT NULL
First, check your table data with this query:
SELECT DATALENGTH(Column_Name) AS FIELDSIZE, Column_Name
FROM Table_Name
If everything is fine, you may have the Prevent Saving Changes option checked in SSMS. Follow these steps to check:
Tools > Options > Designers > uncheck "Prevent saving changes that require table re-creation"
If you are sure that you wouldn't lose data, then:
Update myTable set myNVMaxCol = left(coalesce(myNVMaxCol,''),50);
Alter table myTable alter column myNVMaxCol nvarchar(50) not null;

Column not found

I tried the below in SQL Server Management Studio, in a single query:
alter table add column amount2
update table set amount2=amount
I am getting an error that column amount2 was not found.
Can anyone tell me why I get this error?
That is not valid syntax (it is missing the table name and the column datatype), but in Management Studio you would anyway need the batch separator GO between adding a column to an existing table and any statements referencing the new column.
Or alternatively you can use EXEC to execute it in a child batch.
SQL Server tries to compile all statements in the batch before execution, and this fails when it encounters the statement that uses the not-yet-existing column.
There are a couple of things wrong here.
The correct syntax for adding a column is documented at MSDN - ALTER TABLE:
ALTER TABLE [TableName] ADD [ColumnNAME] [DataType]
'Table' is a Reserved Keyword in SQL Server, although it is possible to have a table named 'Table'. You need to include brackets when referencing it.
SELECT * FROM [Table]
All together, you need
ALTER TABLE [Table] ADD [Amount2] INT
GO -- See Martin's answer for reason why 'GO' is needed here
UPDATE [Table] SET [Amount2] = [Amount]
You can get around this problem like this:
-- Alter the table and add new column "NewColumn"
ALTER TABLE [MyTable] ADD [NewColumn] CHAR(1) NULL;
-- Set the value of NewColumn
EXEC ('UPDATE [MyTable] SET [NewColumn] = ''A'' ');

Tables created by default in user schema

In Sql Server 2008, when I create a table without schema prefix:
create table mytable ( id int identity )
it usually ends up in the schema dbo, with name dbo.mytable.
However, on one of our servers, the table ends up belonging to me:
andomar.mytable
Which configuration difference could explain this?
It depends what your default schema is within that database. Even in SQL Server 2005, if your default schema in that one database is andomar, then any tables created without an explicit schema will end up there.
Check the user properties in that database (not the login properties) and see what the default schema is.
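A quick way to check this from T-SQL, for the current user in the current database:
SELECT default_schema_name
FROM sys.database_principals
WHERE name = USER_NAME();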
If you don't define the schema in which you create the table, it will always use the default one.
You can create it like this:
USE DataBaseName -- define database to use
GO
BEGIN TRAN -- if any error occurs, everything will roll back
CREATE TABLE testovi.razine -- schema name is "testovi" and table name is "razine"
(
id INT NOT NULL IDENTITY(1,1),
razina NVARCHAR(50) NULL,
razinaENG NVARCHAR(50) NULL,
kreirao UNIQUEIDENTIFIER NULL,
VrijemeKreiranja DATETIME NULL
)
ON [PRIMARY]
GO
When you create a table, always set a constraint and an index on the column most used for transactions:
ALTER TABLE testovi.razine ADD CONSTRAINT
PK_mat_razine PRIMARY KEY CLUSTERED
(id) WITH (IGNORE_DUP_KEY=OFF, -- check for duplicates and don't ignore an attempt to insert one
STATISTICS_NORECOMPUTE=OFF, -- important for statistics updates and query optimization
ALLOW_PAGE_LOCKS=ON) -- I believe this is the default, but always set it to ON if not
ON [PRIMARY]
GO
if @@error <> 0
BEGIN
ROLLBACK TRAN
END
ELSE
BEGIN
COMMIT TRAN --if everything passed o.k. table will be created
END
If you want to set the default schema, note that the default is per-user, so you can set it with this code:
USE espabiz -- database;
ALTER USER YourUserName WITH DEFAULT_SCHEMA = SchemaName; -- SchemaName is the default schema for the defined user
Ping if you need additional help, or mark the answer if you find it usable :)
