Computed Column that doesn't auto update - sql-server

I have a computed column that automatically creates a confirmation number by appending the current max ID to a prefix. It works, but not exactly how I need it to.
This is the function:
ALTER FUNCTION [dbo].[SetEPNum](@IdNum INT)
RETURNS VARCHAR(255)
AS
BEGIN
    RETURN (SELECT 'SomePrefix' + RIGHT('00000' + CAST(MAX(IdNum) AS VARCHAR(255)), 5)
            FROM dbo.someTable
            /*WHERE IdNum = @IdNum*/)
END
If I add WHERE IdNum = @IdNum to the SELECT in the function, it gives the illusion of working, but in reality it picks the max IdNum from the one row where IdNum = @IdNum rather than the current max IdNum across all rows. If I remove the WHERE clause, the computed column simply sets every row to the max ID every time it changes.
This is the table:
CREATE TABLE [dbo].[someTable](
    [IdNum] [int] IDENTITY(1,1) NOT NULL,
    [First_Name] [varchar](50) NOT NULL,
    [Last_Name] [varchar](50) NOT NULL,
    [EPNum] AS ([dbo].[SetEPNum]([IdNum]))
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
This is the computed column:
ALTER TABLE dbo.someTable
ADD EPNum AS dbo.SetEPNum(IdNum)
Is there any way to accomplish this? If not, is there an alternative solution?

If my understanding is correct, you are trying to get the max id of the table, as it stood at the moment each record was inserted or updated, to appear next to that record?
Right now you get the same max id next to all records.
That is because, at any given moment, there is one and only one max id; the computed column has no per-row context to work with.
It seems to me this is the job of a trigger, or even of your update statement. Why employ a computed column at all? The computed column gets recomputed every time you display the data.
If you absolutely need to go this way, you should employ some other field (e.g. a modification date) and get the max id from the records that were updated before the current one. It all depends, though, on the business logic of your application and what you are trying to achieve.
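For instance, if the goal is to stamp each row's confirmation number once, at insert time, a trigger along these lines might do it. This is only a sketch: it assumes EPNum becomes a regular varchar column instead of a computed one, and it reuses the names from the question.
ALTER TABLE dbo.someTable DROP COLUMN EPNum;
ALTER TABLE dbo.someTable ADD EPNum VARCHAR(255) NULL;
GO
CREATE TRIGGER trg_someTable_SetEPNum
ON dbo.someTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Stamp each new row once with the prefix plus its own zero-padded id;
    -- later inserts will not touch it again.
    UPDATE t
    SET EPNum = 'SomePrefix' + RIGHT('00000' + CAST(t.IdNum AS VARCHAR(255)), 5)
    FROM dbo.someTable AS t
    INNER JOIN inserted AS i ON t.IdNum = i.IdNum;
END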

Related

How to define a view that will give zeros on days with no recording?

I have a couple of tables and a view that joins them:
CREATE TABLE Valve (
    ValveID int IDENTITY NOT NULL,
    ValveName varchar(100) NOT NULL,
    ValveOwner varchar(100) NOT NULL
)
CREATE TABLE ValveRecording (
    ValveID int NOT NULL,
    Date date NOT NULL,
    Measure varchar(100) NOT NULL,
    Value numeric NOT NULL
)
ALTER VIEW ValveRecordingView
AS
SELECT
    v.ValveName,
    v.ValveOwner,
    vr.Date,
    vr.Measure,
    vr.Value
FROM Valve v
LEFT OUTER JOIN ValveRecording vr ON v.ValveID = vr.ValveID
(Note: the above was just typed in, so it may have errors.)
The problem with the view is that if it is queried for a date range in which no measurement was made, there is no row present for the measure-value pair. I would like to restate the view in such a way that some value is returned for every date-measure-value tuple; if one isn't present, return a zero. I think it's possible, but the SQL is a bit beyond me. I assume it involves a UNION ALL with a query that gets the date-measure-value tuples which aren't already present (one possible approach is sketched below). Probably an ugly query, but that's okay.
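A sketch of one way to do it: generate the date range with a recursive CTE, cross join every valve and measure, then left join the actual recordings and let COALESCE supply the zeros. Table and column names are taken from the question; the date variables are illustrative.
DECLARE @From date = '2024-01-01', @To date = '2024-01-31';

WITH Dates AS (
    SELECT @From AS Date
    UNION ALL
    SELECT DATEADD(day, 1, Date) FROM Dates WHERE Date < @To
)
SELECT d.Date,
       v.ValveName,
       v.ValveOwner,
       m.Measure,
       COALESCE(vr.Value, 0) AS Value  -- zero when no recording exists
FROM Dates AS d
CROSS JOIN Valve AS v
CROSS JOIN (SELECT DISTINCT Measure FROM ValveRecording) AS m
LEFT JOIN ValveRecording AS vr
       ON vr.ValveID = v.ValveID
      AND vr.Date = d.Date
      AND vr.Measure = m.Measure
OPTION (MAXRECURSION 0);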

SQL Server - Order Identity Fields in Table

I have a table with this structure:
CREATE TABLE [dbo].[cl](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [NIF] [numeric](9, 0) NOT NULL,
    [Name] [varchar](80) NOT NULL,
    [Address] [varchar](100) NULL,
    [City] [varchar](40) NULL,
    [State] [varchar](30) NULL,
    [Country] [varchar](25) NULL,
    PRIMARY KEY ([ID], [NIF])
);
Imagine that this table has 3 records: record 1, 2, 3...
Whenever I delete record number 2, the IDENTITY field leaves a gap. The table then has record 1 and record 3. It's not correct!
Even if I use:
DBCC CHECKIDENT('cl', RESEED, 0)
it does not solve my problem, because it will set the ID of the next inserted record to 1. And that's not correct either, because the table will then have a duplicate ID.
Does anyone have a clue about this?
No database is going to reseed or recalculate an auto-incremented field/identity to re-use the values in between ids as in your example. This is impractical on many levels, for example:
- Integrity: a re-used id could mean that records in other systems still refer to the old value after the new value is saved.
- Performance: the database would have to hunt for the lowest gap on every insert.
In MySQL, this is not really happening either (at least in InnoDB or MyISAM; are you using something different?). In InnoDB, the behavior is identical to SQL Server: the counter is managed outside of the table, so deleted values or rolled-back transactions leave gaps between the last value and the next insert. In MyISAM, the value is calculated at the time of insertion instead of being managed through an external counter. This calculation is what gives the perception of recalculation; the value is just never calculated until actually needed (MAX(Id) + 1). Even this won't insert inside gaps (like the id = 2 in your example).
Many people will argue that if you need to use these gaps, there is something that could be improved in your data model. You shouldn't ever need to worry about these gaps.
If you insist on using those gaps, your fastest method would be to log deletes in a separate table, then use an INSTEAD OF INSERT trigger that first looks for ids in this deletions table to re-use (deleting them as it goes, to prevent re-use) and then falls back to the identity for any additional rows to insert.
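A rough sketch of that idea, handling single-row inserts only (the log table dbo.cl_deleted_ids and the trigger names are made up; the dbo.cl columns come from the question):
CREATE TABLE dbo.cl_deleted_ids (ID int PRIMARY KEY);
GO
-- Log every deleted id so it can be re-used later.
CREATE TRIGGER tr_cl_log_deletes ON dbo.cl
AFTER DELETE
AS
    INSERT INTO dbo.cl_deleted_ids (ID)
    SELECT ID FROM deleted;
GO
-- Re-use the lowest logged id if one exists; otherwise let IDENTITY assign the next value.
CREATE TRIGGER tr_cl_fill_gaps ON dbo.cl
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @ID int = (SELECT MIN(ID) FROM dbo.cl_deleted_ids);
    IF @ID IS NOT NULL
    BEGIN
        DELETE FROM dbo.cl_deleted_ids WHERE ID = @ID;  -- consume it to prevent re-use
        SET IDENTITY_INSERT dbo.cl ON;
        INSERT INTO dbo.cl (ID, NIF, Name, Address, City, State, Country)
        SELECT @ID, NIF, Name, Address, City, State, Country FROM inserted;
        SET IDENTITY_INSERT dbo.cl OFF;
    END
    ELSE
        INSERT INTO dbo.cl (NIF, Name, Address, City, State, Country)
        SELECT NIF, Name, Address, City, State, Country FROM inserted;
END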
I guess what you want is something like this:
create table dbo.cl
(
    SurrogateKey int identity(1, 1) primary key not null,
    ID int not null,
    NIF numeric(9, 0) not null,
    Name varchar(80) not null,
    Address varchar(100) null,
    City varchar(40) null,
    State varchar(30) null,
    Country varchar(25) null,
    unique (ID, NIF)
)
go
I added a surrogate key so you'll have the best of both worlds. Now you just need a trigger on the table to "adjust" the ID whenever some prior ID gets deleted:
create trigger tr_on_cl_for_auto_increment on dbo.cl
after delete, update
as
begin
    update c
    set ID = d.New_ID
    from dbo.cl as c
    inner join (
        select c2.SurrogateKey,
               row_number() over (order by c2.SurrogateKey asc) as New_ID
        from dbo.cl as c2
    ) as d
        on c.SurrogateKey = d.SurrogateKey
end
go
Of course this solution also implies that you'll have to ensure (whenever you insert a new record) that you check for yourself which ID to insert next.
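A quick illustration of the renumbering (values made up):
INSERT INTO dbo.cl (ID, NIF, Name) VALUES (1, 1, 'a'), (2, 2, 'b'), (3, 3, 'c');
DELETE FROM dbo.cl WHERE ID = 2;
SELECT SurrogateKey, ID, Name FROM dbo.cl;
-- The trigger has closed the gap: 'a' and 'c' now carry ID 1 and 2.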

Auto increment a bigint column?

I want a bigint ID column for every row of data that I insert into a table, and I want SQL Server to generate the numbers. I tried to create a table with a bigint ID column that auto-increments with the first value as 1. I tried using [ID] [bigint] AUTO_INCREMENT NOT NULL in my CREATE TABLE statement, but I got the error: Incorrect syntax near 'AUTO_INCREMENT'. How do I do this?
Can you not just declare it as an IDENTITY column:
[ID] [bigint] IDENTITY(1,1) NOT NULL;
The (1,1) refers to the seed (the starting value) and the increment (the amount added for each new row).
NOTE: You do not have to provide a value for the ID column when you do an insert. It will automatically choose it. You can modify these values later if required.
EDIT:
Alternatively, you can use a stored procedure to handle all the inserts.
Example:
The stored procedure will take in variables as you would a normal insert (one variable for every column). The logic within the stored procedure can select the max value currently existing in the table and derive the next value from that:
DECLARE @yourVariable bigint = (SELECT MAX(ID) FROM YourTable);
Use @yourVariable as your insert value. You can increment it or change the value as necessary.
I got the answer here - http://www.sqlservercentral.com/Forums/Topic1512425-149-1.aspx
CREATE TABLE Test (
    ID BIGINT IDENTITY NOT NULL,
    SomeOtherColumn char(1)
)
INSERT INTO Test (SomeOtherColumn)
VALUES ('a')
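If you need the generated value back after an insert, SCOPE_IDENTITY() returns the identity value produced in the current scope; shown here against the Test table above:
INSERT INTO Test (SomeOtherColumn) VALUES ('b');
SELECT SCOPE_IDENTITY() AS NewID;  -- the value just assigned to ID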

Check constraint UDF with multiple input parameters not working

I'm trying to implement a check constraint on a table such that records can't be inserted when a record already exists with the values we're trying to insert in two of the columns ("Int_1" and "Int_2"). E.g.:
ID  Name  Int_1  Int_2
1   Dave  1      2
Inserting (2, Steve, 2, 2) into the table above would be okay, as would (3, Mike, 1, 3), but inserting values where Int_1 AND Int_2 already exist is not allowed, i.e. (4, Stuart, 1, 2) is illegal.
I thought defining my table thus would work:
CREATE TABLE [Table](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [Name] [varchar](255) NOT NULL,
    [Int_1] [int] NOT NULL,
    [Int_2] [int] NOT NULL,
    CONSTRAINT [chk_Stuff] CHECK (dbo.chk_Ints(Int_1, Int_2) = 1))
where dbo.chk_Ints is defined as:
CREATE FUNCTION [dbo].[chk_Ints](@Int_1 int, @Int_2 int)
RETURNS int
AS
BEGIN
    DECLARE @Result int
    IF NOT EXISTS (SELECT * FROM [Table] WHERE Int_1 = @Int_1 AND Int_2 = @Int_2)
    BEGIN
        SET @Result = 1
    END
    ELSE
    BEGIN
        SET @Result = 0
    END
    RETURN @Result
END
GO
When using the combo above, if I try to insert any record whatsoever, SQL Server tells me I've broken my check constraint. I can remove all rows from the table and try to insert a first record, and SQL Server still tells me I've broken my constraint, which I can't possibly have done!
I've scoured the internet for quite a while now looking for examples of check constraints where the UDF depends on multiple table columns, but to no avail. Any ideas as to why this might not work?
Thanks in advance :)
Yes, this may seem baffling until you realise what's going on, at which point it becomes quite obvious.
The function is called for the values that are in the row you are trying to insert. But think of how the function is being called. It is a check constraint that calls it.
Next, think of the parameters being passed. Where do they come from? According to the definition, the check constraint takes them from columns Int_1 and Int_2.
So, it passes them as column values. But column values must belong to a row. Which row is it in this case? The one you are trying to insert!
That means your row is already inserted at this point; only the transaction is still pending. And the fact that the row is in the table is crucial, because that is what the function finds: it returns 0 for the very row being inserted, which fails the = 1 check.
Thus, what's happening is this:
- you are trying to insert a row,
- the function sees that row and says that a row with the given parameters already exists,
- the check constraint "reacts" accordingly by prohibiting the insert,
- the insert is rolled back.
Of course, now that you realise all that, it is easy to come up with a different logic of checking for duplicates. Basically, your function should "keep in mind" that the new row is already in the table, and so it should try and determine whether its presence in the table violates any rules that you want to establish. You could, for instance, count the rows matching the given parameters and see if the result is not greater than 1:
IF (SELECT COUNT(*) FROM [Table] WHERE Int_1 = @Int_1 AND Int_2 = @Int_2) < 2
BEGIN
    SET @Result = 1
END
ELSE
BEGIN
    SET @Result = 0
END
However, the entire idea of using a function in a check constraint for this job is very much inferior to just adding a unique constraint on the two columns, as suggested by @a_horse_with_no_name. Do this:
ALTER TABLE [Table]
ADD CONSTRAINT UQ_Table_Int1_Int2 UNIQUE (Int_1, Int_2);
and you can forget about duplicates.
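To illustrate (assuming the (1, Dave, 1, 2) row from the question is already in the table), the duplicate insert now fails with a unique key violation, along these lines:
INSERT INTO [Table] (Name, Int_1, Int_2) VALUES ('Stuart', 1, 2);
-- Msg 2627: Violation of UNIQUE KEY constraint 'UQ_Table_Int1_Int2'. Cannot insert duplicate key ...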

SQL Server: how to constrain a table to contain a single row?

I want to store a single row in a configuration table for my application. I would like to enforce that this table can contain only one row.
What is the simplest way to enforce the single-row constraint?
You make sure one of the columns can only contain one value, and then make that the primary key (or apply a uniqueness constraint).
CREATE TABLE T1(
    Lock char(1) not null,
    /* Other columns */,
    constraint PK_T1 PRIMARY KEY (Lock),
    constraint CK_T1_Locked CHECK (Lock='X')
)
I have a number of these tables in various databases, mostly for storing config. It's a lot nicer knowing that, if the config item should be an int, you'll only ever read an int from the DB.
I usually use Damien's approach, which has always worked great for me, but I also add one thing:
CREATE TABLE T1(
    Lock char(1) not null DEFAULT 'X',
    /* Other columns */,
    constraint PK_T1 PRIMARY KEY (Lock),
    constraint CK_T1_Locked CHECK (Lock='X')
)
Adding the "DEFAULT 'X'", you will never have to deal with the Lock column, and won't have to remember which was the lock value when loading the table for the first time.
You may want to rethink this strategy. In similar situations, I've often found it invaluable to leave the old configuration rows lying around for historical information.
To do that, you add an extra column creation_date_time (the date/time of insertion or update) and an insert or insert/update trigger which populates it with the current date/time.
Then, in order to get your current configuration, you use something like:
select * from config_table order by creation_date_time desc fetch first row only
(depending on your DBMS flavour).
That way, you still get to maintain the history for recovery purposes (you can institute cleanup procedures if the table gets too big but this is unlikely) and you still get to work with the latest configuration.
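In T-SQL, the same idea might look like this (a sketch; the table and column names are assumptions, and a column default stands in for the trigger on plain inserts):
CREATE TABLE config_table (
    creation_date_time datetime2 NOT NULL DEFAULT SYSDATETIME(),
    config_value nvarchar(max) NOT NULL
);
-- Latest configuration:
SELECT TOP (1) * FROM config_table ORDER BY creation_date_time DESC;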
You can implement an INSTEAD OF Trigger to enforce this type of business logic within the database.
The trigger can contain logic to check if a record already exists in the table and if so, ROLLBACK the Insert.
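A minimal sketch of such a trigger (table and column names are made up; it assumes single-row inserts):
CREATE TABLE dbo.AppSettings (Config1 nvarchar(max) NOT NULL, Config2 nvarchar(max) NOT NULL);
GO
CREATE TRIGGER trg_AppSettings_OneRow ON dbo.AppSettings
INSTEAD OF INSERT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM dbo.AppSettings)
    BEGIN
        -- A row already exists: reject the insert entirely.
        RAISERROR('Only one row is allowed in this table.', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END
    INSERT INTO dbo.AppSettings (Config1, Config2)
    SELECT Config1, Config2 FROM inserted;
END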
Now, taking a step back to look at the bigger picture, I wonder if perhaps there is an alternative and more suitable way for you to store this information, perhaps in a configuration file or environment variable for example?
I know this is very old, but instead of thinking big, sometimes it's better to think small: use an identity integer like this:
Create Table TableWhatever
(
    keycol int identity(1, 1) not null primary key check (keycol = 1),
    Col2 varchar(7)
)
This way, each time you try to insert another row, the check constraint will be violated, preventing you from inserting any row, since the identity primary key won't accept any value but 1.
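A quick demo:
INSERT INTO TableWhatever (Col2) VALUES ('first');   -- keycol = 1, succeeds
INSERT INTO TableWhatever (Col2) VALUES ('second');  -- keycol would be 2, violates check(keycol = 1)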
Here's a solution I came up with for a lock-type table which can contain only one row, holding a Y or N (an application lock state, for example).
- Create the table with one column. I put a check constraint on the one column so that only a Y or N can be put in it (or 1 or 0, or whatever).
- Insert one row in the table, with the "normal" state (e.g. N means not locked).
- Then create an INSERT trigger on the table that only has a SIGNAL (DB2) or RAISERROR (SQL Server) or RAISE_APPLICATION_ERROR (Oracle). This makes it so application code can update the table, but any INSERT fails. A SQL Server version of the trigger is sketched after the DB2 example below.
DB2 example:
create table PRICE_LIST_LOCK
(
LOCKED_YN char(1) not null
constraint PRICE_LIST_LOCK_YN_CK check (LOCKED_YN in ('Y', 'N') )
);
--- do this insert when creating the table
insert into PRICE_LIST_LOCK
values ('N');
--- once there is one row in the table, create this trigger
CREATE TRIGGER ONLY_ONE_ROW_IN_PRICE_LIST_LOCK
NO CASCADE
BEFORE INSERT ON PRICE_LIST_LOCK
FOR EACH ROW
SIGNAL SQLSTATE '81000' -- arbitrary user-defined value
SET MESSAGE_TEXT='Only one row is allowed in this table';
Works for me.
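For comparison, a rough SQL Server equivalent of that trigger (RAISERROR in place of SIGNAL; with INSTEAD OF INSERT the insert simply never happens):
CREATE TRIGGER ONLY_ONE_ROW_IN_PRICE_LIST_LOCK
ON PRICE_LIST_LOCK
INSTEAD OF INSERT
AS
    RAISERROR('Only one row is allowed in this table', 16, 1);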
I use a bit field for the primary key, with the name IsActive.
So there can be 2 rows at most, and the SQL to get the valid row is:
select * from Settings where IsActive = 1
if the table is named Settings.
The easiest way is to define the ID field as a computed column with the constant value 1 (or any number), then create a unique index on ID.
CREATE TABLE [dbo].[SingleRowTable](
    [ID] AS ((1)),
    [Title] [varchar](50) NOT NULL,
    CONSTRAINT [IX_SingleRowTable] UNIQUE NONCLUSTERED ([ID] ASC)
) ON [PRIMARY]
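For example (the second insert fails because both rows would compute ID = 1):
INSERT INTO [dbo].[SingleRowTable] (Title) VALUES ('config');   -- OK
INSERT INTO [dbo].[SingleRowTable] (Title) VALUES ('another');  -- unique index violation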
You can write a trigger on the insert action on the table: whenever someone tries to insert a new row, the trigger logic removes the existing row, so only the latest row is kept.
Old question, but how about using IDENTITY(MAX,1) on a small column type? The first insert takes the seed value 255; any further insert overflows the tinyint and fails.
CREATE TABLE [dbo].[Config](
    [ID] [tinyint] IDENTITY(255,1) NOT NULL,
    [Config1] [nvarchar](max) NOT NULL,
    [Config2] [nvarchar](max) NOT NULL
)
Alternatively, guard the insert itself:
IF NOT EXISTS (SELECT * FROM [dbo].[Config])
BEGIN
    -- your insert statement
END
Here we can also use a fixed value that stays the same after the first entry in the database. Example:
Student table:
Id: int (primary key)
FirstName: char
If the application always supplies the same value for the Id column, the primary key constraint rejects every insert after the first, with no lock column needed, so the table holds only one row forever.
Hope this helps!
