I have written this stored procedure and it executes, but it doesn't update the customer. The assignment is: create a procedure named prc_cus_balance_update that takes the invoice number as a parameter and updates the customer balance.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE PRC_CUS_BALANCE_UPDATE3
    @INV_NUMBER INT
AS
BEGIN
    DECLARE @CUS_CODE INT

    SELECT @CUS_CODE = CUS_CODE
    FROM INVOICE
    WHERE @INV_NUMBER = INV_NUMBER

    UPDATE CUSTOMER
    SET CUS_BALANCE = CUS_BALANCE +
        (SELECT INV_TOTAL FROM INVOICE WHERE @INV_NUMBER = INV_NUMBER)
    WHERE @CUS_CODE = CUS_CODE
END
GO
While developing, I'd put in some "extras" to figure out what is going on. Pseudo code below.
You want to make sure you found a matching row, and you want to make sure at least one row was actually updated.
I'm NOT saying the code below is "production ready", but it will show the concepts.
CREATE PROCEDURE PRC_CUS_BALANCE_UPDATE3
    @INV_NUMBER INT
AS
BEGIN
    DECLARE @CUS_CODE INT
    DECLARE @MYROWCOUNT INT

    SELECT @CUS_CODE = CUS_CODE
    FROM INVOICE
    WHERE @INV_NUMBER = INV_NUMBER

    IF (NOT (@CUS_CODE IS NULL))
    BEGIN
        SET NOCOUNT OFF

        UPDATE CUSTOMER
        SET CUS_BALANCE = CUS_BALANCE +
            (SELECT INV_TOTAL FROM INVOICE WHERE @INV_NUMBER = INV_NUMBER)
        WHERE @CUS_CODE = CUS_CODE

        SELECT @MYROWCOUNT = @@ROWCOUNT

        IF (@MYROWCOUNT <= 0)
        BEGIN
            PRINT 'No row updated. :<'
        END

        SET NOCOUNT OFF
    END
    ELSE
    BEGIN
        PRINT '@CUS_CODE match not found.'
    END
END
GO
Try putting some better bulletproofing in: multiple rows, null values, and the like can all cause problems with what you currently have. Here's a stab at it without knowing the specifics of your data model (it may not be appropriate to sum the invoice totals together; I'm just saying you could have multiple rows and need to deal with that).
CREATE PROCEDURE PRC_CUS_BALANCE_UPDATE3
    @INV_NUMBER INT
AS
BEGIN
    DECLARE @CUS_CODE INT

    SELECT TOP 1 @CUS_CODE = CUS_CODE
    FROM INVOICE
    WHERE INV_NUMBER = @INV_NUMBER

    IF @CUS_CODE IS NOT NULL
    BEGIN
        UPDATE CUSTOMER
        SET CUS_BALANCE = ISNULL(CUS_BALANCE, 0.0) +
            ISNULL(
                (SELECT SUM(INV_TOTAL)
                 FROM INVOICE
                 WHERE @INV_NUMBER = INV_NUMBER),
                0.0)
        WHERE CUS_CODE = @CUS_CODE
    END
END
GO
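To sanity-check either version, a quick test call might look like the following (the invoice number 1001 is just a placeholder, not taken from the original post):

    -- Hypothetical test call; replace 1001 with an INV_NUMBER that exists in your INVOICE table.
    EXEC PRC_CUS_BALANCE_UPDATE3 @INV_NUMBER = 1001;

    -- Then verify the balance changed for the matching customer.
    SELECT CUS_CODE, CUS_BALANCE
    FROM CUSTOMER
    WHERE CUS_CODE = (SELECT CUS_CODE FROM INVOICE WHERE INV_NUMBER = 1001);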
Related
There are 2 tables:
Wallets;
Transactions.
There is a stored procedure that, every time it is called, handles (I think as an ACID operation):
updating a row in the Wallets table;
inserting one row into the Transactions table.
The issue occurs when there are many calls to the SP at the same time: the value of PreviousBalance is not correct (sequentially wrong), because the SP reads the old value while another call is still running.
To understand better, look at the following screenshot.
There are 3 transactions with the same DT (IDs 1289, 1288, 1287), and in all of them PreviousBalance is the same, but it is not correct, because:
Trx ID 1288 should be 180,78, the Balance of the previous row;
Trx ID 1289 should be 168,07 = 180,78 - 12,08.
I think the issue is in the SET of the @OLDBalance variable; those 3 threads read the same value at the same time, so when the SP goes to INSERT it loads the same value of PreviousBalance.
How can I make sure @OLDBalance is read correctly, only after the commit of the other operation?
I tried setting several isolation levels in the SP; the result was the same, and sometimes it failed with a deadlock.
I have the following stored procedure:
Stored Procedure
ALTER PROCEDURE [dbo].[upsMovimenta_internal]
    @AccountID int,
    @Amount money,
    @TypeTransactionID int,
    @ProductID int,
    @notes nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @OLDBalance MONEY;
    DECLARE @PreviousBalance MONEY;
    DECLARE @CurrentBalance MONEY;
    DECLARE @Molt float;

    BEGIN TRANSACTION;

    IF NOT EXISTS (SELECT * FROM Accounts WHERE AccountID = @AccountID)
    BEGIN
        RAISERROR(N'Account not found ', 10, 1, N'number', 3);
        RETURN (1)
    END

    SELECT @Molt = Moltiplicatore
    FROM TypeTransactions
    WHERE TypeTransactionID = @TypeTransactionID;

    IF (@Molt IS NULL)
    BEGIN
        RAISERROR(N'Error transaction', 10, 1, N'number', 3);
        RETURN (1)
    END

    SET @Amount = @Amount * @Molt;

    --SELECT * FROM Wallets
    SELECT TOP 1 @OLDBalance = TotalAmount
    FROM Wallets
    WHERE AccountID = @AccountID;

    SET @CurrentBalance = @OLDBalance + @Amount;

    IF (@ProductID = 1)
    BEGIN
        UPDATE Wallets
        SET TotalAmount += @Amount,
            Cash += @Amount
        FROM Wallets WHERE AccountID = @AccountID;
    END

    IF (@ProductID = 2)
    BEGIN
        UPDATE Wallets
        SET TotalAmount += @Amount,
            Fun += @Amount
        FROM Wallets WHERE AccountID = @AccountID;
    END

    INSERT INTO Transactions
        (AccountID, ProductID, DT, TypeTransactionID, Amout, Balance, PreviousBalance, Notes)
    VALUES
        (@AccountID, @ProductID, GETDATE(), @TypeTransactionID, @Amount, @CurrentBalance, @OLDBalance, @notes);

    COMMIT TRANSACTION;
    RETURN (0)
END
Thank you so much guys
Generally, one way of managing locks on records is to apply a dummy update on the rows you want to work on, right after starting the transaction.
In this case SQL Server guarantees that those rows stay locked and no other transaction can access them, so you can change your design to something like this:
begin tran
update myTable
set Field1 = Field1
where someKeyField = 212
-- do the same for other tables that you want to protect against change
-- at this moment all working rows will be locked, other procedure calls will be on hold
-- do your main operations here
commit tran
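Applied to the procedure in the question, a minimal sketch of this pattern (the Wallets table and @AccountID parameter are taken from the post; the rest of the body is elided) could look like:

    -- Sketch only: serialize concurrent calls per account by locking the wallet row first.
    BEGIN TRANSACTION;

    -- Dummy update: takes an exclusive lock on this account's wallet row for the rest of
    -- the transaction, so another call for the same account blocks here until COMMIT.
    UPDATE Wallets
    SET TotalAmount = TotalAmount
    WHERE AccountID = @AccountID;

    -- ... then read @OLDBalance, apply the real UPDATE and the INSERT INTO Transactions ...

    COMMIT TRANSACTION;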
The issue with this is that the other proc calls will wait, which may degrade performance or even time out if traffic is high and the operation in this proc is lengthy.
If you are working in a high-transaction environment, you need to change your design.
Update: Design Suggestion
I don't get why you have both PreviousBalance and Balance in your Transactions table (it is against the design rules, though you can override a rule in a special case).
You probably have that to speed up your calculations or to make your queries simpler, but it is not good practice in an OLTP database.
The rules say you keep the Amount column and calculate PreviousBalance and Balance somewhere else.
You should drop PreviousBalance but keep the Balance column, and every time you insert a transaction, update (increase/decrease) the Balance column. You also need to initialize the Balance column at the first transaction.
This is what I can think of. If I knew your whole system, I would be able to offer better ideas.
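As an illustration of "calculate PreviousBalance somewhere else" (this is not from the original answer; the column names come from the question's Transactions table, and TransactionID is an assumed name for the ID column), the previous balance can be derived at query time with a window function instead of being stored:

    -- Sketch: derive each row's previous balance from the prior transaction of the same account.
    SELECT
        TransactionID,          -- assumed name for the column shown as "Trx ID" in the question
        AccountID,
        DT,
        Amout,                  -- column name as spelled in the question
        Balance,
        LAG(Balance) OVER (PARTITION BY AccountID
                           ORDER BY DT, TransactionID) AS PreviousBalance
    FROM Transactions;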
Requirement
I want a way to generate a new unique number (Invoice Number) in a continuous sequence (no number should be left out when generating a new number)
Valid Example: 1, 2, 3, 4
Invalid Example: 1, 2, 4, 3 (not in a continuous sequence)
Current Schema
Here is the existing schema of the Test table.
Solution I came up with
After doing some research, I came up with the code below, which seems to be working as of now.
DECLARE @i as int = 0

WHILE (@i <= 10000 * 10000)
BEGIN
    BEGIN TRANSACTION
        INSERT INTO Test (UniqueNo, [Text])
        VALUES ((SELECT ISNULL(MAX(UniqueNo), 0) + 1 FROM Test WITH (TABLOCK)), 'a')
    COMMIT

    SET @i = @i + 1;
END
Testing
I tried running the code from 12 different SQL query windows (12 threads, you could say), and so far it generates a new, unique value for each record, even after inserting 162,921 rows.
Main Question
Can the above code result in duplicate values?
I tried it by trial and error and it works perfectly, BUT when I dig into transaction locking, the SELECT statement takes a shared lock on the whole table, which means it will allow concurrent transactions to read the same data, right?
That means that multiple transactions can generate duplicate values, right?
Then how come I am not able to see any duplicate values yet?
EDIT
As per david's comment:
I cannot use an identity field because, if I ever delete a record, it would be difficult for me to fill that number back in.
This is a procedure that can create your continuous check numbers:
create procedure [dbo].[aa]
    @var int
as
    declare @isexists int = 0
    declare @lastnum int = 0

    set @isexists = (select isnull((select [text] from t000test where [text] = @var), 0))

    if (isnull(@isexists, 0) <= 0)
    begin
        set @lastnum = (select isnull(max([text]), 0) from t000test)

        if (@var > @lastnum)
        begin
            insert into t000test ([text]) values (@var)
        end
        else
            print 'your number is not in range'
    end
    else
        print 'your number already exists in the data'
GO
You can test this procedure like this:
exec dbo.aa 6 -- or any number you like
And this loop creates your numbers automatically over whatever range you like:
declare @yourrange int = 50
declare @id int = 0;

while @yourrange > 0
begin
    exec dbo.aa @id
    set @id = @id + 1;
    set @yourrange = @yourrange - 1;
end
I am using 50, but you can use a bigger or smaller range.
I'm having problems with a stored procedure that iterates over a table. It works fine with a few hundred rows, but when the table has thousands of rows it saturates the memory and crashes.
The procedure should iterate row by row and fill a column with a value calculated from another column in the same row. I suspect the cursor is what crashes the procedure; in other questions I've read that a WHILE loop should be used instead, but I'm no expert in SQL and the examples I tried from those answers didn't work.
CREATE PROCEDURE [dbo].[GenerateNewHashes]
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @module BIGINT = 382449983

    IF EXISTS (SELECT 1 FROM dbo.telephoneSource WHERE Hash IS NULL)
    BEGIN
        DECLARE hash_cursor CURSOR FOR
            SELECT a.telephone, a.Hash
            FROM dbo.telephoneSource AS a

        OPEN hash_cursor
        FETCH FROM hash_cursor

        WHILE @@FETCH_STATUS = 0
        BEGIN
            UPDATE dbo.telephoneSource
            SET Hash = CAST(telephone AS BIGINT) % @module
            WHERE CURRENT OF hash_cursor

            FETCH NEXT FROM hash_cursor
        END

        CLOSE hash_cursor
        DEALLOCATE hash_cursor
    END
END
Basically, the stored procedure is intended to fill a new column called Hash that was added to the existing table. When the script that updates the table finishes, the new column is full of NULL values, and this stored procedure is then supposed to fill each NULL with telephone number (a bigint) % the @module variable (also a bigint).
Is there anything, besides changing to a WHILE loop, that I can do to make it use less memory or simply not crash? Thanks in advance.
You could do the following:
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.telephoneSource
    SET Hash = CAST(telephone AS BIGINT) % @module
    WHERE Hash IS NULL;

    IF @@ROWCOUNT = 0
    BEGIN
        BREAK;
    END;
END;
This will keep updating Hash as long as there are NULL values and will exit once no records have been updated.
Adding a filtered index could be useful as well:
CREATE NONCLUSTERED INDEX IX_telephoneSource_Hash_telephone
ON dbo.telephoneSource (Hash)
INCLUDE (telephone)
WHERE Hash IS NULL;
It will speed up the lookups needed for the update, but it might not be necessary.
Here is example code, from my comment above, to do it in loops without using a cursor. If you also add a WHERE clause so the field you are updating IS NULL inside the inner update, it won't touch rows that were already done (in case you need to restart the process or something).
I didn't include your specific tables, but if you need me to, I can add them.
DECLARE @PerBatchCount as int
DECLARE @MAXID as bigint
DECLARE @WorkingOnID as bigint

SET @PerBatchCount = 1000

-- Find the range of IDs to process, using your table name
SELECT @WorkingOnID = MIN(ID), @MAXID = MAX(ID)
FROM YouTableHere WITH (NOLOCK)

WHILE @WorkingOnID <= @MAXID
BEGIN
    -- do an update on all the ones that exist in the offer table NOW
    -- DO YOUR UPDATE HERE
    -- include this where clause, where ID is the PK you are looping through:
    -- WHERE ID BETWEEN @WorkingOnID AND (@WorkingOnID + @PerBatchCount - 1)

    SET @WorkingOnID = @WorkingOnID + @PerBatchCount
END

SET NOCOUNT OFF;
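Filling that template in for the question's dbo.telephoneSource table (this assumes the table has a numeric primary key named ID, which the question does not show), it might look like:

    DECLARE @PerBatchCount int = 1000
    DECLARE @MAXID bigint
    DECLARE @WorkingOnID bigint
    DECLARE @module bigint = 382449983

    -- Assumed: telephoneSource has an integer primary key column named ID.
    SELECT @WorkingOnID = MIN(ID), @MAXID = MAX(ID)
    FROM dbo.telephoneSource WITH (NOLOCK)

    WHILE @WorkingOnID <= @MAXID
    BEGIN
        UPDATE dbo.telephoneSource
        SET Hash = CAST(telephone AS BIGINT) % @module
        WHERE ID BETWEEN @WorkingOnID AND (@WorkingOnID + @PerBatchCount - 1)
          AND Hash IS NULL   -- skip rows that are already done, so the process can be restarted

        SET @WorkingOnID = @WorkingOnID + @PerBatchCount
    END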
I would simply add a computed column:
ALTER TABLE dbo.telephoneSource
ADD Hash AS (CAST(telephone AS BIGINT)%382449983) PERSISTED;
This is a continuation of my previous question:
sql update for dynamic row number
This time I have an updated requirement.
I have 2 tables:
CraftTypes & EmployeeCraftTypes.
I need to update multiple rows in the CraftTypes table, and I was able to do it as per the answer provided by TheGameiswar.
Now there is a modification in the requirement.
In the CraftTypes table, the CraftTypeKey column is referenced by a foreign key from the EmployeeCraftTypes table.
If an entry for a CraftTypeKey exists in the EmployeeCraftTypes table, then that row should not be updated.
Also, the CraftTypeKeys whose rows are not updated must be obtained, so the FK-restriction status of those rows can be returned.
This is the SQL I am using.
CREATE TYPE [DBO].[DEPARTMENTTABLETYPE] AS TABLE
(
    DepartmentTypeKey SMALLINT,
    DepartmentTypeName VARCHAR(50),
    DepartmentTypeCode VARCHAR(10),
    DepartmentTypeDescription VARCHAR(128)
)
GO
ALTER PROCEDURE [dbo].[usp_UpdateDepartmentType]
    @DEPARTMENTDETAILS [DBO].[DEPARTMENTTABLETYPE] READONLY
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @rowcount1 INT
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION

            UPDATE D1
            SET D1.[DepartmentTypeName] = D2.DepartmentTypeName,
                D1.[DepartmentTypeCode] = D2.DepartmentTypeCode,
                D1.[DepartmentTypeDescription] = D2.DepartmentTypeDescription
            FROM [dbo].[DepartmentTypes] D1
            INNER JOIN @DEPARTMENTDETAILS D2
                ON D1.DepartmentTypeKey = D2.DepartmentTypeKey
            WHERE D2.[DepartmentTypeKey] NOT IN
                (SELECT 1 FROM [dbo].[EmployeeDepartment] WHERE [DepartmentTypeKey] = D2.DepartmentTypeKey)

            SET @rowcount1 = @@ROWCOUNT
            COMMIT
        END TRY
        BEGIN CATCH
            SET @rowcount1 = 0
            ROLLBACK TRAN
        END CATCH

        IF @rowcount1 = 0
            SELECT -174;
        ELSE
            SELECT 100;
    END
END
Please help, and thanks in advance.
OK, I think I figured out a way this time. I am not sure it is the right way, but it is enough to meet the requirements.
I select the distinct rows that have an FK reference in the EmployeeCrafts table as a second SELECT query.
Now I can get the rows that are not updated because of the FK constraint.
This is the SQL I used:
ALTER PROCEDURE [dbo].[usp_UpdateCraftType]
    @CRAFTDETAILS [DBO].[CRAFTTABLETYPE] READONLY
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @STATUSKEY TINYINT = (SELECT DBO.GETSTATUSKEY('ACTIVE'))
    DECLARE @ROWCOUNT1 INT

    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION

            UPDATE C1
            SET C1.[CraftTypeName] = C2.CRAFTTYPENAME,
                C1.[CRAFTTYPEDESCRIPTION] = C2.CRAFTTYPEDESCRIPTION,
                C1.[StatusKey] = C2.[StatusKey]
            FROM [dbo].[CRAFTTYPES] C1
            INNER JOIN @CRAFTDETAILS C2
                ON C1.CRAFTTYPEKEY = C2.CRAFTTYPEKEY
            WHERE C2.[CRAFTTYPEKEY] NOT IN
                (SELECT EC.[CRAFTTYPEKEY] FROM [dbo].[EmployeeCrafts] EC WHERE EC.[CRAFTTYPEKEY] = C2.[CRAFTTYPEKEY])

            SET @ROWCOUNT1 = @@ROWCOUNT
            COMMIT
        END TRY
        BEGIN CATCH
            SET @ROWCOUNT1 = 0
            ROLLBACK TRAN
        END CATCH

        --SET @ROWCOUNT1 = @@ROWCOUNT
        IF @ROWCOUNT1 = 0
            SELECT -172;
        ELSE
        BEGIN
            SELECT 100;

            SELECT DISTINCT EC.[CRAFTTYPEKEY], 'Value Already Assigned' AS Reason
            FROM [DBO].[EmployeeCrafts] EC
            JOIN @CRAFTDETAILS C3
                ON C3.[CRAFTTYPEKEY] = EC.[CRAFTTYPEKEY]
            WHERE EC.[CRAFTTYPEKEY] = C3.[CRAFTTYPEKEY]
        END
    END
END
Now, on the Web API side, I can check for update failures by checking the row count of the second result set.
If that row count is more than 0, an update error message can be generated.
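A hedged example of calling the procedure (the CRAFTTABLETYPE column list and the values below are assumptions inferred from the columns the UPDATE reads; adjust them to the real type definition):

    -- Hypothetical call; assumes CRAFTTABLETYPE has at least these columns.
    DECLARE @details DBO.CRAFTTABLETYPE;

    INSERT INTO @details (CRAFTTYPEKEY, CRAFTTYPENAME, CRAFTTYPEDESCRIPTION, StatusKey)
    VALUES (3, 'Welding', 'Updated description', 1);

    -- First result set: 100 or -172; second result set (if any): keys skipped because of the FK.
    EXEC [dbo].[usp_UpdateCraftType] @CRAFTDETAILS = @details;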
Hope it will be helpful to someone.
Say I have a stored proc that returns a number of rows, but I want to limit that number using TOP or something else. Is it possible to do this dynamically, without creating another stored proc or having to alter the existing one every time?
So my SP could look like this:
create procedure [dbo].[myproc]
    @param1 int
as
begin
    select sumthing
    from mytable
    where mycolumn = 2
end
How can I add another parameter to this SP and make it optional, so that it restricts the number of rows only when I need it to?
Like this:
create procedure [dbo].[myproc]
    @param1 int,
    @optionalRowcount bigint = 999999999999  -- effectively no limit when not supplied
as
begin
    select top (@optionalRowcount) sumthing
    from mytable
    where mycolumn = 2
end
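Hypothetical calls showing both shapes (the parameter values are placeholders):

    -- All rows: the default limit is effectively unbounded
    exec [dbo].[myproc] @param1 = 1;

    -- Only the first 10 rows
    exec [dbo].[myproc] @param1 = 1, @optionalRowcount = 10;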
The ROWCOUNT option is probably the simplest way to solve this for a SELECT scenario, but I did just notice on the MSDN page that ROWCOUNT may be deprecated for DELETE, INSERT, and UPDATE in a future release. So I'll post this alternative here in case it's useful for someone:
create procedure [dbo].[myproc]
    @param1 int = null
as
begin
    if @param1 is null
        select * from myTable
    else
        select top (@param1) *
        from myTable
end
You can use the SET ROWCOUNT option. If @MaxRowCount = 0, all rows will be returned; otherwise, the number of rows will be limited to the value in @MaxRowCount.
create procedure [dbo].[myproc]
    @param1 int,
    @MaxRowCount int
as
begin
    set rowcount @MaxRowCount

    select sumthing
    from mytable
    where mycolumn = 2

    set rowcount 0
end
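Hypothetical calls for this version (values are placeholders): passing 0 returns everything, and any positive value caps the result set.

    -- Cap the result at 25 rows
    exec [dbo].[myproc] @param1 = 1, @MaxRowCount = 25;

    -- Return all rows
    exec [dbo].[myproc] @param1 = 1, @MaxRowCount = 0;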