Verify SUM of table records on INSERT - sql-server

I have a table that keeps track of transactions for various accounts:
AccountTransactions
AccountTransactionID int NOT NULL (PK)
AccountID int NOT NULL (FK)
Amount decimal NOT NULL
Upon inserting a record with a negative amount into this table, I need to verify that the SUM of the Amount column for the specified account stays at or above zero. If the new record would cause this SUM to fall below zero, the record should not be inserted.
For example, if I have the following records, inserting an amount of -8.00 for AccountID 5 should not be allowed:
AccountTransactionID  AccountID  Amount
---------------------------------------
1                     5          10.00
2                     6          15.00
3                     5          -3.00
What is the best method to accomplish this? Check constraint, trigger, or just check for this condition in a stored procedure?

You can do a simple check:
DECLARE @TheSum decimal(18,2)
SET @TheSum = (SELECT SUM(MyCol) FROM MyTable WHERE AccountID = @SomeParameter)
IF @TheSum > 0
BEGIN
    --do your insert
END
...
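If you go the stored-procedure route the question mentions, here is a minimal sketch that wraps such a check in a procedure; it folds the new amount into the test so the balance cannot drop below zero. The procedure name and locking hints are assumptions, not part of the original answer:
CREATE PROCEDURE AddAccountTransaction
    @AccountID int,
    @Amount decimal(18,2)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- lock the account's rows so two concurrent calls cannot both pass the check
    IF (SELECT ISNULL(SUM(Amount), 0) + @Amount
        FROM AccountTransactions WITH (UPDLOCK, HOLDLOCK)
        WHERE AccountID = @AccountID) >= 0
    BEGIN
        INSERT INTO AccountTransactions (AccountID, Amount)
        VALUES (@AccountID, @Amount);
    END

    COMMIT TRANSACTION;
END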

You could add a where clause to your insert:
insert YourTable
    (AccountID, Amount)
select @AccountID, @Amount
where 0 <=
(
    select @Amount + sum(Amount)
    from YourTable
    where AccountID = @AccountID
)
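Alternatively, if you want the rule enforced no matter which code path does the insert, a trigger can run the same check; a minimal AFTER INSERT sketch, assuming the AccountTransactions table from the question (the trigger name is illustrative):
CREATE TRIGGER trg_AccountTransactions_NonNegative
ON AccountTransactions
AFTER INSERT
AS
BEGIN
    -- reject the statement if any affected account would end up below zero
    IF EXISTS (
        SELECT 1
        FROM AccountTransactions t
        WHERE t.AccountID IN (SELECT AccountID FROM inserted)
        GROUP BY t.AccountID
        HAVING SUM(t.Amount) < 0
    )
    BEGIN
        RAISERROR ('Insert would make an account balance negative.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END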

Related

Alternative to LIST aggregate function in Firebird

I have a table Orders:
select * from Orders order by 2
Ordernumber  Zone
12345        1
12345        2
12345        3
What SQL would produce the following output?
Ordernumber  Zone
12345        123
Unfortunately, LIST cannot be used, since this is an old Firebird 1.5.x.
Are there any other possibilities?
CREATE PROCEDURE GET_ZONER (ordernumber integer)
RETURNS (zoner varchar(20))
AS
DECLARE VARIABLE zone varchar(20);
BEGIN
  zoner = '';
  FOR
    SELECT DISTINCT zone FROM orders
    WHERE ordernumber = :ordernumber
    ORDER BY zone
    INTO :zone
  DO
  BEGIN
    zoner = zoner || zone;
  END
  SUSPEND;
END
and then use
select * from GET_ZONER(12345)

Unique values in two columns

I have two columns and I need their values to be unique across both columns (as if they were a single column).
My first attempt was to create a sequence and set default constraints.
create sequence seq1
as bigint
start with 1
increment by 1
cache;
go
create table product (
pk uniqueidentifier
, id_1 bigint not null default (next value for seq1)
, id_2 bigint not null default (next value for seq1)
);
go
insert into product (pk) values (newid());
insert into product (pk) values (newid());
insert into product (pk) values (newid());
insert into product (pk) values (newid());
insert into product (pk) values (newid());
go
And it doesn't work. The result of the query above will be:
id_1  id_2
1     1
2     2
3     3
4     4
5     5
For now I have stopped at using 2 sequences, one for odd and one for even numbers.
create sequence seq2
as bigint
start with 1
increment by 2
cache;
go
create sequence seq3
as bigint
start with 2
increment by 2
cache;
go
But if in the future I need to add another column that must also be unique, I will have a problem.
I also thought about stored procedures. Something like this works for me:
create procedure sp_insertProduct
as
begin
declare @id1 as bigint = next value for seq1;
declare @id2 as bigint = next value for seq1;
insert into product (pk, id_1, id_2) values (newid(), @id1, @id2);
end
go
exec sp_insertProduct;
exec sp_insertProduct;
exec sp_insertProduct;
go
But due to my ORM framework restrictions I cannot use stored procedures for inserting.
So is there a better solution for that problem?
PS. for some reasons I can not use uniqueidentifiers.
UPDATE
I think I need to explain the question a bit more clearly. I have a working solution for now (and both current answers will also work), but I wonder if there is an extensible solution that will:
provide uniqueness of values in multiple columns (with the ability to
add additional columns in the future);
avoid using uniqueidentifiers.
For a better understanding of the question, this is how I check uniqueness:
with src as (
select id_1 as id from product
union all
select id_2 as id from product
)
select id, count(*) as equal_values
from src
group by id
having (count(*) > 1)
You can use a single sequence that increments by 2, with id_2 defaulting to the sequence value plus one:
create sequence seq1
as bigint
start with 1
increment by 2
cache;
go
create table product (
pk uniqueidentifier
, id_1 bigint not null default (next value for seq1)
, id_2 bigint not null default (next value for seq1 + 1)
);
go
doing this..
insert into product (pk) values (newid());
insert into product (pk) values (newid());
insert into product (pk) values (newid());
insert into product (pk) values (newid());
insert into product (pk) values (newid());
this would generate:
pk                                    id_1  id_2
2A159914-8105-4DC1-9D7E-570CC5444172  1     2
6DAFEF16-2B81-4A10-99EF-B3F1A74389C6  3     4
8C6F6697-D993-4320-92BB-04CD56804C5A  5     6
AC97F37F-CAC3-4E83-BDD4-4B55D009C334  7     8
3DDAADA0-D7DB-4350-8087-ABF02B539552  9     10
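If more id columns are needed later, the same idea can be stretched: let one sequence increment by the number of columns and give each column its own offset. A sketch for three columns, assuming the `next value for ... + n` defaults behave as in the output above (table and sequence names are illustrative, untested):
create sequence seq_multi
    as bigint
    start with 1
    increment by 3
    cache;
go
create table product3 (
    pk uniqueidentifier
    , id_1 bigint not null default (next value for seq_multi)
    , id_2 bigint not null default (next value for seq_multi + 1)
    , id_3 bigint not null default (next value for seq_multi + 2)
);
go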
Maybe this seems very naive, but it should work. I didn't check, but you may need to add some parentheses:
create table product (
pk uniqueidentifier
, id_1 bigint not null default (next value for seq1)
, id_2 bigint not null default (-next value for seq1)
);
go

Setting variable inside table update statement

TempTable has columns RunningTotal and ClientCount; we also have a @RunningTotal variable declared and set to 0.
Can someone please explain what this line does?
UPDATE Temptable
SET @RunningTotal = RunningTotal = @RunningTotal + ClientCount
I've never seen this construct before, but it seems to work like this:
it fills the RunningTotal column with a cumulative total of ClientCount.
Say we start with a table with just ClientCount filled in:
CREATE TABLE dbo.Temptable (ClientCount int, RunningTotal int)
INSERT INTO Temptable (ClientCount) VALUES (5), (4), (6), (2)
SELECT * FROM Temptable
ClientCount RunningTotal
----------- ------------
5           NULL
4           NULL
6           NULL
2           NULL
And then run the update statement:
DECLARE @RunningTotal int = 0
UPDATE Temptable SET @RunningTotal = RunningTotal = @RunningTotal + ClientCount
SELECT * FROM Temptable
ClientCount RunningTotal
----------- ------------
5           5
4           9
6           15
2           17
As you can see, each value of RunningTotal is the sum of all ClientCount values from the current and any preceding records.
The downside is that you have no control over the order in which the records are processed, which makes me wonder whether this is a recommended approach in a production environment.
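For comparison, on SQL Server 2012 and later a windowed SUM gives the same running total with an explicit, deterministic order; a minimal sketch, assuming the table also had an Id column to order by (the sample table above does not):
-- assumes Temptable has a hypothetical Id column that defines the order
UPDATE t
SET RunningTotal = s.RunningTotal
FROM Temptable AS t
JOIN (
    SELECT Id,
           SUM(ClientCount) OVER (ORDER BY Id ROWS UNBOUNDED PRECEDING) AS RunningTotal
    FROM Temptable
) AS s ON s.Id = t.Id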
Please check here for a deeper discussion:
Calculate a Running Total in SQL Server

Checking next row in table is incremented by 1 minute in datetime column

I need to check a lot of data in a table to make sure my feed has not skipped anything.
Basically the table has the following columns
ID Datetime Price
The data in the DateTime column is incremented by 1 minute in each successive row. I need to check the next row against the current one to see if it is 1 minute ahead of the row being queried in that specific context.
The query will probably need some sort of loop that grabs a copy of the next row and compares its datetime to the current row's, to make sure it is incremented by 1 minute.
I created a test-table to match your description, and inserted 100 rows with 1 minute between each row like this:
CREATE TABLE [Test] ([Id] int IDENTITY(1,1), [Date] datetime, [Price] int);
WITH [Tally] AS (
SELECT GETDATE() AS [Date]
UNION ALL
SELECT DATEADD(minute, -1, [Date]) FROM [Tally] WHERE [Date] > DATEADD(minute, -99, GETDATE())
)
INSERT INTO [Test] ([Date], [Price])
SELECT [Date], 123 AS [Price]
FROM [Tally]
Then I deleted a record in the middle to simulate a missing minute:
DELETE FROM [Test]
WHERE Id = 50
Now we can use this query to find missing records:
SELECT
a.*
,CASE WHEN b.[Id] IS NULL THEN 'Next record is Missing!' ELSE CAST(b.[Id] as varchar) END AS NextId
FROM
[Test] AS a
LEFT JOIN [Test] AS b ON a.[Date] = DATEADD(minute,1,b.[Date])
WHERE
b.[Id] IS NULL
The result will look like this:
Id          Date                    Price       NextId
----------- ----------------------- ----------- ------------------------------
49          2013-05-11 22:42:56.440 123         Next record is Missing!
100         2013-05-11 21:51:56.440 123         Next record is Missing!
(2 row(s) affected)
The key to the solution is to join the table to itself, using DATEADD to look for the record that should exist one minute later. The last record of the table will of course report that the next row is missing, since it hasn't been inserted yet.
Borrowing TheQ's sample data, you can use a gaps-and-islands approach: the minute offset minus the dense rank stays constant within an unbroken run of consecutive minutes, so grouping by that difference returns each contiguous island of data, and the missing minutes are the gaps between the islands:
WITH T
AS (SELECT *,
DATEDIFF(MINUTE, '20000101', [Date]) -
DENSE_RANK() OVER (ORDER BY [Date]) AS G
FROM Test)
SELECT MIN([Date]) AS StartIsland,
MAX([Date]) AS EndIsland
FROM T
GROUP BY G

Auto running number ID with format xxxx/year number (9999/12) in SQL Server stored procedure

I have one table (Stock_ID, Stock_Name). I want to write a stored procedure in SQL Server that generates a running number for Stock_ID with a format like xxxx/12 (xxxx = a number starting from 0001 up to 9999; 12 is the last 2 digits of the current year).
My scenario is that when the year changes, the running number should reset to 0001/13.
What do you intend to do when you hit more than 9999 in a single year? It may sound impossible, but over the years I've had to deal with many "it will never happen" data-related design mess-ups from code-first, design-later developers. These are major pains depending on how many places you need to fix, since these items are usually primary keys and foreign keys used all over.
This looks like a system requirement to SHOW the data this way, but it is the developer's responsibility to design the internals of the application. The way you store it and the way you display it don't have to be identical. I'd split that into two columns, using an int for the number portion and a tinyint for the 2-digit year portion. You can use a computed column for quick and easy display (persist it and index it if necessary), where you pad with leading zeros and add the slash. Throw in a check constraint on the year portion to make sure it stays within a reasonable range. You can make the number portion an identity and just have a job reseed it back to 1 every New Year's Eve.
try it out:
--drop table YourTable
--create the basic table
CREATE TABLE YourTable
(YourNumber int identity(1,1) not null
,YourYear tinyint not null
,YourData varchar(10)
,CHECK (YourYear>=12 and YourYear<=25) --optional check constraint
)
--add the persisted computed column
ALTER TABLE YourTable ADD YourFormattedNumber AS ISNULL(RIGHT('0000'+CONVERT(varchar(10),YourNumber),4)+'/'+RIGHT(CONVERT(varchar(10),YourYear),2),'/') PERSISTED
--make the persisted computed column the primary key
ALTER TABLE YourTable ADD CONSTRAINT PK_YourTable PRIMARY KEY CLUSTERED (YourFormattedNumber)
sample data:
--insert rows in 2012
insert into YourTable values (12,'aaaa')
insert into YourTable values (12,'bbbb')
insert into YourTable values (12,'cccc')
--new years eve job run this
DBCC CHECKIDENT (YourTable, RESEED, 0)
--insert rows in 2013
insert into YourTable values (13,'aaaa')
insert into YourTable values (13,'bbbb')
select * from YourTable order by YourYear,YourNumber
OUTPUT:
YourNumber  YourYear YourData   YourFormattedNumber
----------- -------- ---------- -------------------
1           12       aaaa       0001/12
2           12       bbbb       0002/12
3           12       cccc       0003/12
1           13       aaaa       0001/13
2           13       bbbb       0002/13
(5 row(s) affected)
To handle the possibility of more than 9999 rows per year, try a different computed column calculation:
CREATE TABLE YourTable
(YourNumber int identity(9998,1) not null --<<<notice the identity starting point, so it hits 9999 quicker for this simple test
,YourYear tinyint not null
,YourData varchar(10)
)
--handles more than 9999 values per year
ALTER TABLE YourTable ADD YourFormattedNumber AS
    ISNULL(
        RIGHT(
            REPLICATE('0', CASE WHEN LEN(CONVERT(varchar(10),YourNumber)) < 4 THEN 4 ELSE 1 END)
                + CONVERT(varchar(10),YourNumber),
            CASE WHEN LEN(CONVERT(varchar(10),YourNumber)) < 4 THEN 4 ELSE LEN(CONVERT(varchar(10),YourNumber)) END
        )
        + '/' + RIGHT(CONVERT(varchar(10),YourYear),2)
    ,'/') PERSISTED
ALTER TABLE YourTable ADD CONSTRAINT PK_YourTable PRIMARY KEY CLUSTERED (YourFormattedNumber)
sample data:
insert into YourTable values (12,'aaaa')
insert into YourTable values (12,'bbbb')
insert into YourTable values (12,'cccc')
DBCC CHECKIDENT (YourTable, RESEED, 0) --new years eve job run this
insert into YourTable values (13,'aaaa')
insert into YourTable values (13,'bbbb')
select * from YourTable order by YourYear,YourNumber
OUTPUT:
YourNumber  YourYear YourData   YourFormattedNumber
----------- -------- ---------- --------------------
9998        12       aaaa       9998/12
9999        12       bbbb       9999/12
10000       12       cccc       10000/12
1           13       aaaa       0001/13
2           13       bbbb       0002/13
(5 row(s) affected)
This might help:
DECLARE @tbl TABLE(Stock_ID INT,Stock_Name VARCHAR(100))
INSERT INTO @tbl
SELECT 1,'Test'
UNION ALL
SELECT 2,'Test2'
DECLARE @ShortDate VARCHAR(2)=RIGHT(CAST(YEAR(GETDATE()) AS VARCHAR(4)),2)
;WITH CTE AS
(
SELECT
CAST(ROW_NUMBER() OVER(ORDER BY tbl.Stock_ID) AS VARCHAR(4)) AS RowNbr,
tbl.Stock_ID,
tbl.Stock_Name
FROM
@tbl AS tbl
)
SELECT
REPLICATE('0', 4-LEN(RowNbr))+CTE.RowNbr+'/'+@ShortDate AS YourColumn,
CTE.Stock_ID,
CTE.Stock_Name
FROM
CTE
From memory, this is a way to get the next id:
declare @maxid int
select @maxid = 0
-- if there are no rows for the current year yet, @maxid stays 0; otherwise this gives the next id
select @maxid = isnull(max(convert(int, substring(Stock_Id, 1, 4))) + 1, 0)
from YourTable -- the table holding Stock_ID
where substring(Stock_Id, 6, 2) = substring(convert(varchar(4), year(getdate())), 3, 2)
declare @nextid varchar(7)
select @nextid = right('0000' + convert(varchar(10), @maxid), 4) + '/' + substring(convert(varchar(4), year(getdate())), 3, 2)
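Putting the pieces together, a minimal sketch of an insert procedure built around that next-id calculation; the table name Stock and the procedure name are assumptions (the question only gives the columns), and the locking hints are there so two concurrent calls cannot compute the same number:
CREATE PROCEDURE InsertStock
    @Stock_Name varchar(100)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @yy varchar(2) = RIGHT(CONVERT(varchar(4), YEAR(GETDATE())), 2);
    DECLARE @next int;

    BEGIN TRANSACTION;

    -- highest number already used this year, plus one (starts at 0001 in a new year)
    SELECT @next = ISNULL(MAX(CONVERT(int, LEFT(Stock_ID, 4))), 0) + 1
    FROM Stock WITH (UPDLOCK, HOLDLOCK)
    WHERE RIGHT(Stock_ID, 2) = @yy;

    INSERT INTO Stock (Stock_ID, Stock_Name)
    VALUES (RIGHT('0000' + CONVERT(varchar(4), @next), 4) + '/' + @yy, @Stock_Name);

    COMMIT TRANSACTION;
END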
