I want a database-level block on adding more than 3 instances of an id into a column in a mapping table. I do not want to use a trigger or add any non-computed column to the table. I tried an indexed view, but I can't use HAVING or CAST in the query, as the two invalid examples below show. Any ideas?
CREATE VIEW VW WITH SCHEMABINDING AS
SELECT col1, CAST(COUNT_BIG(*)+252 AS TINYINT) a
FROM dbo.tbl1 GROUP BY col1

CREATE VIEW VW WITH SCHEMABINDING AS
SELECT col1, COUNT_BIG(*), CAST(256 AS TINYINT) a
FROM dbo.tbl1 GROUP BY col1 HAVING COUNT(*)>3
You can create a function that takes an id as a parameter and counts how many rows with that id there are in the table:
CREATE FUNCTION [dbo].[test](@id integer)
RETURNS int
AS
BEGIN
DECLARE @retval int
SELECT @retval = COUNT(*) FROM dbo.tbl1 WHERE col1 = @id
RETURN @retval
END
Then you can add a check constraint to the table that checks the count for that id.
The test for the check constraint will be
dbo.test(col1) <= 3
This check applies to both inserts and updates.
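A minimal sketch of wiring it up (the constraint name chk_tbl1_max3 is my invention; the table and column names are taken from the question):
ALTER TABLE dbo.tbl1
ADD CONSTRAINT chk_tbl1_max3 CHECK (dbo.test(col1) <= 3);
When a fourth row for a given col1 value is inserted, the row is already in the table at constraint-check time, so the function counts 4 and the constraint rejects the statement.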
Related
I have a table similar to this:
CREATE TABLE dbo.SomeTable (Work_ID VARCHAR(9));
I need to be able to run the following query and unfortunately cannot change the data type of the Work_ID column:
SELECT Work_ID
FROM dbo.SomeTable
WHERE Work_ID >= 100 AND Work_ID <= 200
This of course gives me an implicit conversion and causes a table scan (several million rows).
My thought was to put the following indexed view on it.
CREATE VIEW [dbo].[vw_Work_ID]
WITH SCHEMABINDING AS
SELECT CAST(q.Work_ID as INT) as Work_ID
FROM dbo.SomeTable q
GO
CREATE UNIQUE CLUSTERED INDEX [cl_vw_Work_ID] ON [dbo].[vw_Work_ID]
(
[Work_ID] ASC
)
GO
When I now run
SELECT Work_ID FROM dbo.vw_Work_ID WHERE Work_ID >= 100 AND Work_ID <= 200
I still get an implicit conversion and a table scan. Any solutions?
Use TRY_CAST instead of CAST to avoid conversion errors; the resulting value will be NULL for invalid integer values. Also, add a NOEXPAND hint so that the view's index is used:
CREATE TABLE dbo.SomeTable (Work_ID VARCHAR(9));
GO
CREATE VIEW [dbo].[vw_Work_ID]
WITH SCHEMABINDING AS
SELECT TRY_CAST(q.Work_ID as INT) as Work_ID
FROM dbo.SomeTable q;
GO
CREATE UNIQUE CLUSTERED INDEX [cl_vw_Work_ID] ON [dbo].[vw_Work_ID]
(
[Work_ID] ASC
);
GO
INSERT INTO dbo.SomeTable VALUES('111');
INSERT INTO dbo.SomeTable VALUES('xxx');
GO
SELECT *
FROM [dbo].[vw_Work_ID] WITH(NOEXPAND)
WHERE Work_ID = 0;
GO
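For the question's original range predicate, a usage sketch against the same view (again with NOEXPAND, which non-Enterprise editions need before they will use the view's index):
SELECT Work_ID
FROM dbo.vw_Work_ID WITH (NOEXPAND)
WHERE Work_ID >= 100 AND Work_ID <= 200;
Both sides of the comparison are now INT, so no implicit conversion of the column occurs and the view's index can be sought.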
We can create a filtered index on the column using the same WHERE conditions as the query. When we check with SET STATISTICS XML ON, we see that the query was run against the index and did not do a table scan.
Please see the dbFiddle link for confirmation of the query plan.
SET STATISTICS XML ON;
CREATE TABLE dbo.SomeTable (Work_ID VARCHAR(9));
SELECT * FROM dbo.SomeTable
WHERE Work_ID >= '100' AND Work_ID <='200';
The query plan shows a full table scan
create index [cl_vw_Work_ID]
on [dbo].[SomeTable](Work_ID)
WHERE Work_ID >= '100' AND Work_ID <='200';
SELECT * FROM dbo.SomeTable
WHERE Work_ID >= '100' AND Work_ID <='200';
The query plan shows an index scan and no table scan
db<>fiddle here
UPDATE
Following the comment that the values 100 and 200 are not fixed, I have tried creating a computed column casting Work_ID to INT and created an index on it. The query plan still shows a full table scan, even when I insert 4000 rows.
ALTER TABLE dbo.SomeTable
ADD num_work_id AS CAST(Work_ID AS INT)
;
create index [cl_vw_Work_ID]
on [dbo].[SomeTable](num_work_id)
;
SELECT * FROM dbo.SomeTable
WHERE num_work_id >= 100 AND num_work_id <= 200;
db<>fiddle here
with 4000 rows db<>fiddle here
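One possible explanation (an assumption on my part; the fiddles don't confirm it): with SELECT *, an index on num_work_id alone is not covering, so the optimizer can prefer a single scan over a seek plus a per-row lookup for Work_ID. A sketch that covers the query, with an index name of my choosing:
create index ix_num_work_id_covering
on [dbo].[SomeTable](num_work_id)
include (Work_ID);
SELECT Work_ID, num_work_id FROM dbo.SomeTable
WHERE num_work_id >= 100 AND num_work_id <= 200;
With every referenced column present in the index, a seek on the num_work_id range becomes the cheaper plan.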
Is there a way to set IDENTITY_INSERT ON for a table-valued type? The way it is done with tables isn't working.
CREATE TYPE dbo.tvp_test AS TABLE
(
id INT NOT NULL IDENTITY(1, 1),
a INT NULL
);
GO
DECLARE @test dbo.tvp_test;
SET IDENTITY_INSERT @test ON;
INSERT INTO @test VALUES (1, 1);
DROP TYPE dbo.tvp_test;
Error:
Msg 102, Level 15, State 1, Line 13
Incorrect syntax near '@test'
Is there a way to set IDENTITY_INSERT ON for table valued type?
TL;DR: No.
SET IDENTITY_INSERT is a command to be used against a table object, not a variable. From the documentation, SET IDENTITY_INSERT (Transact-SQL):
Allows explicit values to be inserted into the identity column of a table.
Syntax
SET IDENTITY_INSERT [ [ database_name . ] schema_name . ] table_name { ON | OFF }
Arguments
database_name
Is the name of the database in which the specified table resides.
schema_name
Is the name of the schema to which the table belongs.
table_name
Is the name of a table with an identity column.
Notice that this makes no reference to a variable at all; that's because it can't be used against one.
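To illustrate the difference, a quick sketch against a throwaway real table (the table name is mine); the same statement that failed against the variable is accepted here:
CREATE TABLE dbo.test_identity (id INT NOT NULL IDENTITY(1, 1), a INT NULL);
GO
SET IDENTITY_INSERT dbo.test_identity ON;
INSERT INTO dbo.test_identity (id, a) VALUES (1, 1); -- explicit id value accepted
SET IDENTITY_INSERT dbo.test_identity OFF;
DROP TABLE dbo.test_identity;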
If you do need both behaviors from a table type, one that allows explicit values for its id column and one that uses the IDENTITY, you will need to define two table types: one with the IDENTITY property and the other without:
CREATE TYPE dbo.tvp_test_i AS TABLE (id INT NOT NULL IDENTITY(1, 1),
a INT NULL);
CREATE TYPE dbo.tvp_test_ni AS TABLE (id INT NOT NULL,
a INT NULL);
GO
DECLARE @i dbo.tvp_test_i;
INSERT INTO @i (a)
VALUES(17),(21);
DECLARE @ni dbo.tvp_test_ni;
INSERT INTO @ni (id,a)
VALUES(3,95),(5,34);
SELECT *
FROM @i;
SELECT *
FROM @ni;
You could accommodate both ids (auto-generated and/or manually specified) in the same table type with some added overhead:
CREATE TYPE dbo.tvp_test_xyz AS TABLE
(
autoid INT NOT NULL IDENTITY(1, 1), --auto generated id
manualid int null, --manually inserted id (filled in when needed)
id as isnull(manualid, autoid) unique, --the final id, used in queries; ISNULL makes it non-nullable, which is what allows the UNIQUE constraint
a INT NULL
);
GO
declare @t as dbo.tvp_test_xyz;
--case: use the auto-generated id
insert into @t(a)
select top (100) object_id
from sys.all_objects;
--id = autoid
select *
from @t;
select * from @t
where id between 10 and 20;
--case: manual id
delete from @t;
insert into @t(manualid, a)
select top (100) row_number() over(order by name desc), object_id
from sys.all_objects;
--id = manualid
select * from @t;
select * from @t
where id between 10 and 20;
I have a table holding items for a given list id in my MS SQL Server database (2008 R2).
I would like to add constraints so that no two list ids have the same item list. Below illustrates my schema.
ListID  ItemID
1       a
1       b
2       a
3       a
3       b
In the above example, ListID 3 should fail. I guess you can't put a constraint/check within the database itself (triggers, check constraints) and the logic can only be enforced from the frontend?
Thanks in advance for any help.
Create a function that performs the logic you want and then create a check constraint or index that leverages that function.
Here is a working example; the final insert fails. The function is evaluated row by row, so if you need to insert a set of rows and evaluate afterwards, you'd need an INSTEAD OF trigger:
CREATE TABLE dbo.Test(ListID INT, ItemID CHAR(1))
GO
CREATE FUNCTION dbo.TestConstraintPassed(@ListID INT, @ItemID CHAR(1))
RETURNS TINYINT
AS
BEGIN
DECLARE @retVal TINYINT = 0;
DECLARE @data TABLE (ListID INT, ItemID CHAR(1),[Match] INT)
-- snapshot the table; every row starts with Match = -1
INSERT INTO @data(ListID,ItemID,[Match]) SELECT ListID,ItemID,-1 AS [Match] FROM dbo.Test
-- rows whose ItemID appears in the new list's item set get Match = 1
UPDATE @data
SET [Match]=1
WHERE ItemID IN (SELECT ItemID FROM @data WHERE ListID=@ListID)
DECLARE @MatchCount INT
SELECT @MatchCount=SUM([Match]) FROM @data WHERE ListID=@ListID
-- another list can only reach the same sum when its item set is identical
IF NOT EXISTS(
SELECT *
FROM (
SELECT ListID,SUM([Match]) AS [MatchCount]
FROM @data
WHERE ListID<>@ListID
GROUP BY ListID
) dat
WHERE @MatchCount=[MatchCount]
)
BEGIN
SET @retVal=1;
END
RETURN @retVal;
END
GO
ALTER TABLE dbo.Test
ADD CONSTRAINT chkTest
CHECK (dbo.TestConstraintPassed(ListID, ItemID) = 1);
GO
INSERT INTO dbo.Test(ListID,ItemID) SELECT 1,'a'
INSERT INTO dbo.Test(ListID,ItemID) SELECT 1,'b'
INSERT INTO dbo.Test(ListID,ItemID) SELECT 2,'a'
INSERT INTO dbo.Test(ListID,ItemID) SELECT 2,'b'
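If the constraint works as intended, that last insert (which would make list 2's item set identical to list 1's) is rejected with an error along these lines (exact wording varies by version):
Msg 547, Level 16, State 0
The INSERT statement conflicted with the CHECK constraint "chkTest".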
Related
How could I prevent a value from being entered that is a prefix of another value in the same column? For example, if MyTable.NumberPrefix already contains 'abc', then 'ab' can't be added.
My first attempt (below) was to use an indexed view. But a unique index cannot be created on a view that uses a derived table (and I can't figure out how to write the view without it).
create view MyPrefixes
with schemabinding
as
select
left(a.NumberPrefix, b.Length) as CommonPrefix
from
dbo.MyTable a
cross join
(
select distinct
len(NumberPrefix) as Length
from
dbo.MyTable
) b
create unique clustered index MyIndex on MyPrefixes (CommonPrefix) --ERROR
Some test data:
insert MyTable (NumberPrefix) values ('abc') -- OK
insert MyTable (NumberPrefix) values ('ab') -- Error
insert MyTable (NumberPrefix) values ('a') -- Error
insert MyTable (NumberPrefix) values ('abd') -- OK
insert MyTable (NumberPrefix) values ('abcd') -- Error
Use a check constraint with a user-defined function:
create function fnPrefix(@prefix varchar(100))
returns bit
as
begin
-- the new row always matches itself, so a count above 1 means a genuine conflict
if (select count(*) from MyTable
where NumberPrefix like @prefix + '%' or @prefix like NumberPrefix + '%') > 1
return 0
return 1
end
Then add constraint:
alter table MyTable
add constraint chkPrefix check(dbo.fnPrefix(NumberPrefix) = 1)
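With the constraint in place, the question's test data should behave as annotated: 'abc' and 'abd' succeed, while 'ab', 'a', and 'abcd' are each rejected because they are a prefix of, or are prefixed by, an existing value.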
SQL Server 2000:
I have a table with test data (about 100,000 rows). I want to update a column with random values picked from another table. Following this question, this is what I am trying:
UPDATE testdata
SET type = (SELECT TOP 1 id FROM testtypes ORDER BY CHECKSUM(NEWID()))
-- or even
UPDATE testdata
SET type = (SELECT TOP 1 id FROM testtypes ORDER BY NEWID())
However, the "type" field is still with the same value for all rows; Any ideas what Am I doing wrong?
[EDIT]
I would expect this query to return a different value for each row, but it doesn't:
SELECT testdata.id, (SELECT TOP 1 id FROM testtypes ORDER BY CHECKSUM(NEWID())) type
FROM testdata
-- however, seeding with a RAND value works
SELECT testdata.id, (SELECT TOP 1 id FROM testtypes ORDER BY CHECKSUM(NEWID()) + RAND(testdata.id)) type
FROM testdata
Your problem is that the subquery is evaluated only once: you are selecting a single value and then updating all rows with that one value.
To really get randomization going, you need a step-by-step / looping approach. I tried this in SQL Server 2008, but I think it should work in SQL Server 2000 as well:
-- declare a temporary TABLE variable in memory
DECLARE @Temporary TABLE (ID INT)
-- insert all your ID values (the PK) into that temporary table
INSERT INTO @Temporary SELECT ID FROM dbo.TestData
-- check to see we have the values
SELECT COUNT(*) AS 'Before the loop' FROM @Temporary
-- pick an ID from the temporary table at random
DECLARE @WorkID INT
SELECT TOP 1 @WorkID = ID FROM @Temporary ORDER BY NEWID()
WHILE @WorkID IS NOT NULL
BEGIN
-- now update exactly one row in your base table with a new random value
UPDATE dbo.TestData
SET [type] = (SELECT TOP 1 id FROM dbo.TestTypes ORDER BY NEWID())
WHERE ID = @WorkID
-- remove that ID from the temporary table - it has been updated
DELETE FROM @Temporary WHERE ID = @WorkID
-- first set @WorkID back to NULL and then pick a new ID from
-- the temporary table at random
SET @WorkID = NULL
SELECT TOP 1 @WorkID = ID FROM @Temporary ORDER BY NEWID()
END
-- check to see we have no more IDs left
SELECT COUNT(*) AS 'After the update loop' FROM @Temporary
You need to force a per-row calculation in the selection of the new ids.
This should do the trick:
UPDATE testdata
SET type = (SELECT TOP 1 id FROM testtypes ORDER BY outerTT.id*CHECKSUM(NEWID()))
FROM testtypes outerTT
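For completeness, outside the asker's SQL Server 2000 constraint: on SQL Server 2005 and later, CROSS APPLY with a correlated ORDER BY is a more idiomatic way to force the per-row pick. A sketch, assuming the same testdata/testtypes tables:
UPDATE td
SET type = pick.id
FROM testdata td
CROSS APPLY (SELECT TOP 1 id
FROM testtypes
-- referencing td.id correlates the subquery, so it is re-evaluated for every row
ORDER BY CHECKSUM(NEWID(), td.id)) pick;
The reference to td.id inside the apply stops the optimizer from caching a single result and reusing it for all rows.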