I've got the following tables (only key columns shown):
Order       OrderItem       OrderItemDoc    Document
=======     ===========     ============    ==========
OrderId     OrderItemId     OrderItemId     DocumentId
--etc--     OrderId         DocumentId      --etc--
            --etc--
I'm writing a stored procedure to 'clone' an Order (takes an existing OrderId as a parameter, copies the Order and all related items, then returns the new OrderId). I'm stuck on the 'OrderItemDoc' joining table as it will be joining two sets of newly created records. I'm thinking I'll need to loop round a temporary table that maps the old IDs to the new ones. Is that the right direction to go in? It's running on MS-SQL 2000.
There are many efficient ways of doing this in SQL Server 2005 and 2008. Here's a way to do it using SQL Server 2000.
You need to declare a variable to hold the cloned OrderId and create temp tables that link the original OrderItems to the cloned ones, so the OrderItemDoc rows can be rebuilt against the new ids.
Here's some sample code on how to do that. It relies on the insert sequence to link the old OrderItems to the new ones in the OrderItemDoc table.
CREATE PROCEDURE CloneOrder
(
@OrderId int
)
AS
DECLARE @NewOrderId int
--create the cloned order
INSERT [Order] (...OrderColumnList...)
SELECT ...OrderColumnList... FROM [Order] WHERE OrderId = @OrderId;
-- Get the new OrderId
SET @NewOrderId = SCOPE_IDENTITY();
-- create the cloned OrderItems
INSERT OrderItem (OrderId, ...OrderItemColumns...)
SELECT @NewOrderId, ...OrderItemColumns...
FROM OrderItem WHERE OrderId = @OrderId
-- Now for the tricky part
-- Create temp tables that number the original and the cloned OrderItems
-- in the same order, so the two sets can be matched up by sequence
CREATE TABLE #OldItems (Seq int IDENTITY(1,1), OrderItemId int)
CREATE TABLE #NewItems (Seq int IDENTITY(1,1), OrderItemId int)
-- Insert the original OrderItems in OrderItemId order
INSERT #OldItems (OrderItemId)
SELECT OrderItemId FROM OrderItem WHERE OrderId = @OrderId ORDER BY OrderItemId
-- Insert the cloned OrderItems in the same order
-- (this assumes the clone insert handed out the new ids in the same relative order)
INSERT #NewItems (OrderItemId)
SELECT OrderItemId FROM OrderItem WHERE OrderId = @NewOrderId ORDER BY OrderItemId
-- Now to complete the cloning process:
-- copy each original item's documents over to the matching cloned item
INSERT OrderItemDoc (OrderItemId, DocumentId)
SELECT
n.OrderItemId, od.DocumentId
FROM
OrderItemDoc od
JOIN #OldItems o ON o.OrderItemId = od.OrderItemId
JOIN #NewItems n ON n.Seq = o.Seq
-- Return the new OrderId
SELECT @NewOrderId AS NewOrderId
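To use it, a call along these lines should work on SQL 2000 (the OrderId value here is just an example; INSERT ... EXEC into a temp table captures the returned new OrderId):
-- clone order 42 and read back the new OrderId
CREATE TABLE #Result (NewOrderId int)
INSERT #Result EXEC CloneOrder @OrderId = 42
SELECT NewOrderId FROM #Result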
Yes, a memory table or a temp table would be your best option. If your PKs are identity columns then you could also make assumptions about IDs being contiguous based on an offset (i.e. you could assume that your new OrderItemId is equal to the existing MAX(OrderItemId) in the table plus the relative offset of the item in the order), but I don't like making assumptions like that, and it becomes a pain going more than one level deep.
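As an illustration only of that offset assumption (a sketch, reusing @OrderId and @NewOrderId from the procedure above; it assumes OrderItemId is an identity with no gaps and that nothing else inserts into OrderItem while this runs):
-- capture the highest OrderItemId before cloning
DECLARE @Offset int
SELECT @Offset = MAX(OrderItemId) FROM OrderItem
-- ...clone the OrderItems here, ordered by the original OrderItemId...
-- the Nth original item (by OrderItemId) is then assumed to have been cloned as @Offset + N,
-- so the old-to-new mapping is pure arithmetic instead of a stored map
INSERT OrderItemDoc (OrderItemId, DocumentId)
SELECT
@Offset + (SELECT COUNT(*) FROM OrderItem oi2
           WHERE oi2.OrderId = @OrderId
             AND oi2.OrderItemId <= oi.OrderItemId),
od.DocumentId
FROM OrderItemDoc od
JOIN OrderItem oi ON oi.OrderItemId = od.OrderItemId
WHERE oi.OrderId = @OrderId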
Drats, I wrote this up and then saw you were on 2000... (SQL Server 2000 doesn't have the OUTPUT clause that this trick uses.)
No loop necessary in SQL 2005 and later:
INSERT INTO [Order] (.....)    ----assuming OrderId is an identity
SELECT
.....
FROM [Order]
WHERE OrderId = @OrderId

-- grab the new OrderId so the cloned items can point at the new order
DECLARE @NewOrderId int
SET @NewOrderId = SCOPE_IDENTITY()

DECLARE @y TABLE (RowID int identity(1,1) primary key not null, OldID int, NewID int)

INSERT INTO OrderItem (OrderId, .....)    ---assuming OrderItemId is an identity
OUTPUT OrderItem.OrderItemId, INSERTED.OrderItemId
INTO @y (OldID, NewID)
SELECT
@NewOrderId, .....
FROM OrderItem
WHERE OrderId = @OrderId

INSERT INTO OrderItemDoc (OrderItemId, DocumentId)
SELECT
y.NewID, od.DocumentId
FROM OrderItemDoc od
INNER JOIN @y y ON od.OrderItemId = y.OldID
Do the Document table the same way: make a new mapping table, etc...
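One wrinkle: a plain INSERT's OUTPUT clause can only reference the inserted columns, so capturing the old-to-new DocumentId map in a single statement is easiest with the MERGE trick available from SQL Server 2008 onward. A rough sketch for that Document step, with a hypothetical DocName column standing in for the real Document columns (and still assuming DocumentId is an identity):
DECLARE @d TABLE (OldID int, NewID int)
-- clone every Document referenced by the original order, capturing old and new ids
MERGE Document AS tgt
USING
(
    SELECT d.DocumentId, d.DocName
    FROM Document d
    WHERE d.DocumentId IN (SELECT od.DocumentId
                           FROM OrderItemDoc od
                           JOIN OrderItem oi ON oi.OrderItemId = od.OrderItemId
                           WHERE oi.OrderId = @OrderId)
) AS src
ON 1 = 0    -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (DocName) VALUES (src.DocName)
OUTPUT src.DocumentId, INSERTED.DocumentId INTO @d (OldID, NewID);
-- @d can then be joined into the OrderItemDoc insert in place of the original DocumentIds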
I'm using an Azure SQL database. I have a few tables partitioned on a 90-day date boundary. We have a stored procedure that shifts data to maintain the proper partition breakpoint/range. I'm using a small function to provide the proper date breakpoint for my queries so I don't have to constantly update all my views.
But just by virtue of using that function in my queries, partition elimination no longer happens. Do I have no choice but to put hard-coded values in my queries everywhere and constantly modify them?
Here is a sample that reproduces the problem.
Update: After changing the PartitionDate function below according to the marked answer, it was fine for a short time (partition elimination occurred). Then, queries started sucking again. When I ran simple queries filtered by the date function, partitions were no longer eliminated.
------------------------------- setup
-- Create functions PartitionDate and PartitionQueryDate
create function PartitionDate() returns date as
begin
return GETDATE() - 91 -- returns 1/4/2019 today
end
go
create function PartitionQueryDate() returns date as
begin
return GETDATE() - 90 -- returns 1/5/2019
end
go
-- Create partition func and scheme using above functions
CREATE PARTITION FUNCTION order_pf (smalldatetime) AS RANGE RIGHT FOR VALUES (dbo.PartitionDate())
CREATE PARTITION SCHEME order_ps AS PARTITION order_pf ALL TO ([PRIMARY])
-- Create Order (pk, OrderDate, Fk), Customer (pk) tables. Order is partitioned
create table Customer
(
id int primary key identity(1,1),
FirstName varchar(255) not null
)
create table [Order]
(
id int identity(1,1), OrderDate smalldatetime not null,
CustomerId int not null,
CONSTRAINT [FK_Orders_Customer] FOREIGN KEY ([CustomerId]) REFERENCES Customer([id])
) on order_ps(OrderDate);
-- Add in indexes to Order: only OrderDate on the partition func
CREATE CLUSTERED INDEX [Order_OrderDate] ON [Order]([OrderDate] ASC) ON [order_ps] ([OrderDate]);
CREATE NONCLUSTERED INDEX [FK_Order_Customer] ON [Order](CustomerId, OrderDate) ON [order_ps] ([OrderDate]) -- seems to work the same with or without the partition reference.
go
-- Add some data before and after the partition break
insert Customer values ('bob')
insert [Order] values('12-31-2018', SCOPE_IDENTITY())
insert Customer values ('hank')
insert [Order] values('1-6-2019', SCOPE_IDENTITY())
---------------------------- test
-- verify a row per partition:
SELECT $PARTITION.order_pf(OrderDate) as Partition_Number, COUNT(*) as Row_Count
FROM [Order]
GROUP BY $PARTITION.order_pf(OrderDate)
-- Simple queries with actual execution plan turned on. The queries are logically equivalent.
select COUNT(1) from [Order] where OrderDate > '1-5-2019' -- Index seek Order_OrderDate; actual partition count 1
select COUNT(1) from [Order] where OrderDate > dbo.PartitionQueryDate() -- Index seek Order_OrderDate; actual partition count 2
-- Cleanup
drop table if exists [Order]
drop table if exists Customer
drop partition scheme order_ps
drop partition function order_pf
drop function if exists PartitionDate
drop function if exists PartitionQueryDate
One workaround would be to assign the function result to a variable first.
declare @pqd smalldatetime = dbo.PartitionQueryDate();
select COUNT(1) from [Order] where OrderDate > @pqd
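A view can't declare a variable, so this workaround really only fits ad-hoc queries and stored procedures; a minimal sketch of wrapping it in a procedure (the procedure name is made up):
CREATE PROCEDURE dbo.CountRecentOrders
AS
BEGIN
    -- evaluate the function once, then filter on the variable
    DECLARE @pqd smalldatetime = dbo.PartitionQueryDate();
    SELECT COUNT(1) FROM [Order] WHERE OrderDate > @pqd;
END
GO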
Another option would be to use an inline TVF
CREATE FUNCTION dbo.PartitionQueryDateTVF ()
RETURNS TABLE
AS
RETURN
(
SELECT CAST(CAST( GETDATE() - 90 AS DATE) AS SMALLDATETIME) AS Date
)
GO
SELECT COUNT(1) from [Order] where OrderDate > (SELECT Date FROM dbo.PartitionQueryDateTVF())
This may be something that is improved with inline scalar UDFs, but I'm not in a position to test that at the moment.
How can filtered statistics created on one table be used for cardinality estimation of another table in a TSQL query?
The code sample I pasted below comes from http://www.sqlpassion.at/archive/2013/10/29/fixing-cardinality-estimation-errors-with-filtered-statistics/?awt_l=OGZ5i&awt_m=3sMpQpfwa9YUUTS
-- Create a new database
CREATE DATABASE FilteredStatistics
GO
-- Use it
USE FilteredStatistics
GO
-- Create a new table
CREATE TABLE Country
(
ID INT PRIMARY KEY,
Name VARCHAR(100)
)
GO
-- Create a new table
CREATE TABLE Orders
(
ID INT,
SalesAmount DECIMAL(18, 2)
)
GO
-- Create a Non-Clustered Index
CREATE NONCLUSTERED INDEX idx_Name ON Country(Name)
GO
-- Create a Clustered Index
CREATE CLUSTERED INDEX idx_ID_SalesAmount ON Orders(ID, SalesAmount)
GO
-- Insert a few records into the Lookup Table
INSERT INTO Country VALUES(0, 'Austria')
INSERT INTO Country VALUES(1, 'UK')
INSERT INTO Country VALUES(2, 'France')
GO
-- Insert uneven distributed order data
INSERT INTO Orders VALUES(0, 0)
DECLARE @i INT = 1
WHILE @i <= 1000
BEGIN
INSERT INTO Orders VALUES (1, @i)
SET @i += 1
END
GO
-- Update the Statistics on both tables
UPDATE STATISTICS Country WITH FULLSCAN
UPDATE STATISTICS Orders WITH FULLSCAN
GO
--incorrect row estimate for Orders table
SELECT SalesAmount FROM Country
INNER JOIN Orders ON Country.ID = Orders.ID
WHERE Name = 'UK'
--create filtered statistics on table country
CREATE STATISTICS Country_UK ON Country(ID)
WHERE Name = 'UK'
--correct row estimate for Orders table
SELECT SalesAmount FROM Country
INNER JOIN Orders ON Country.ID = Orders.ID
WHERE Name = 'UK'
Before the filtered statistics were created, note that the estimated number of rows returned from the Orders table is 500.5, which is an estimate derived from the density vector on the Orders table (1001 rows * 0.5 density).
Here is the density vector and histogram for the Orders table:
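For reference, the density vector and histogram can be viewed with DBCC SHOW_STATISTICS, for example:
-- density vector for the clustered index statistics on Orders
DBCC SHOW_STATISTICS ('Orders', 'idx_ID_SalesAmount') WITH DENSITY_VECTOR
-- histogram for the same statistics object
DBCC SHOW_STATISTICS ('Orders', 'idx_ID_SalesAmount') WITH HISTOGRAM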
After the filtered stats were created on table Country, the estimate of rows returned from the Orders table is correct. How are filtered stats on table Country helping with cardinality estimation on table Orders?
In my head this sounds improbable, but I'd like to know if I can do it:
INSERT INTO MyTable (Name)
VALUES ('First'),
('Second'),
('Third'),
('Fourth'),
('Fifth');
SELECT INSERTED Name, ID FROM TheAboveQuery
Where ID is an auto-indexed column?
Just to clarify, I want to select ONLY the newly inserted rows.
Starting with SQL Server 2008 you can use the OUTPUT clause with an INSERT statement:
DECLARE @T TABLE (ID INT, Name NVARCHAR(100))
INSERT INTO MyTable (Name)
OUTPUT INSERTED.ID, INSERTED.Name INTO @T
VALUES
('First'),
('Second'),
('Third'),
('Fourth'),
('Fifth');
SELECT Name, ID FROM @T;
UPDATE: if the table has no triggers, you can output straight to the client without the table variable:
INSERT INTO MyTable (Name)
OUTPUT INSERTED.ID, INSERTED.Name
VALUES
('First'),
('Second'),
('Third'),
('Fourth'),
('Fifth');
Sure, you can use an IDENTITY property on your ID field, and create the CLUSTERED INDEX on it
create table MyTable ( ID int identity(1,1),
[Name] varchar(64),
constraint [PK_MyTable] primary key clustered (ID asc) on [Primary]
)
--suppose this data already existed...
INSERT INTO MyTable (Name)
VALUES
('First'),
('Second'),
('Third'),
('Fourth'),
('Fifth');
--now we insert some more... and then only return these rows
INSERT INTO MyTable (Name)
VALUES
('Sixth'),
('Seventh')
select top (@@ROWCOUNT)
ID,
Name
from MyTable
order by ID desc
@@ROWCOUNT returns the number of rows affected by the last statement executed. You can always see this in the Messages tab of SQL Server Management Studio. Here we take the number of rows just inserted and combine it with TOP, which limits the rows returned in a query to the specified number of rows (or a percentage if you use PERCENT). It is important that you use ORDER BY when using TOP, otherwise the rows you get back aren't guaranteed to be the ones you just inserted.
From my previous edited answer...
If you are trying to see what values were inserted, then I assume you are inserting them a different way and this is usually handled with an OUTPUT clause, TRIGGER if you are trying to do something with these records after the insert, etc... more information would be needed.
I have 2 tables, CombinableOrders and Orders, and a temporary table of order Ids.
Orders contains a nullable FK to CombinableOrders.
I create the record as follows:
INSERT INTO
CombinableOrders ([Rank])
VALUES (0)
I then need to associate that new combinableOrder with a set of orders derived from a temporary table of ids
UPDATE Orders
SET Orders.CombinableOrder_Id = @Id_To_Original_Insert
FROM Orders AS Orders
INNER JOIN #Ids AS Ids
ON Orders.Id = Ids.Id
How would I get the Id from the newly created CombinableOrders record?
You probably just want to declare a variable of the right type
DECLARE @id INT
and then set it using SCOPE_IDENTITY() after the insert
SELECT @id = SCOPE_IDENTITY()
Alternatively you could capture it in the insert using the OUTPUT clause, but that may be slightly more complicated in your case.
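Putting it together with the statements from the question, a minimal sketch might look like this:
DECLARE @id INT
-- create the CombinableOrders record
INSERT INTO CombinableOrders ([Rank])
VALUES (0)
-- SCOPE_IDENTITY() returns the identity value generated by the insert above, within this scope
SELECT @id = SCOPE_IDENTITY()
-- associate the orders in the temp table with the new CombinableOrder
UPDATE Orders
SET Orders.CombinableOrder_Id = @id
FROM Orders AS Orders
INNER JOIN #Ids AS Ids
ON Orders.Id = Ids.Id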
Given this table definition:
create table herb.app (appId int identity primary key
, application varchar(15) unique
, customerName varchar(35),LoanProtectionInsurance bit
, State varchar(3),Address varchar(50),LoanAmt money
,addedBy varchar(7) not null,AddedDt smalldatetime default getdate())
I believe changes will be minimal, usually only a single field, and very sparse.
So I created this table:
create table herb.appAudit(appAuditId int primary key
, field varchar(20), oldValue varchar(50),ChangedBy varchar(7) not null,AddedDt smalldatetime default getdate())
How, in a trigger, can I get the column name of the value that was changed so I can store it? I know how to get the value itself by joining the deleted table.
Use the inserted and deleted tables. Nigel Rivett wrote a great generic audit trail trigger using these tables. It is fairly complex SQL code, but it highlights some pretty cool ways of pulling together the information and once you understand them you can create a custom solution using his ideas as inspiration, or you could just use his script.
Here are the important ideas about the tables:
On an insert, inserted holds the inserted values and deleted is empty.
On an update, inserted holds the new values and deleted holds the old values.
On a delete, deleted holds the deleted values and inserted is empty.
The structure of the inserted and deleted tables (if not empty) is identical to the target table.
You can determine the column names from system tables and iterate on them, as illustrated in Nigel's code (a small sketch of this appears after the examples below).
if exists (select * from inserted)
if exists (select * from deleted)
-- this is an update
...
else
-- this is an insert
...
else
-- this is a delete
...
-- For updates to a specific field
SELECT d.[MyField] AS OldValue, i.[MyField] AS NewValue, system_user AS [User]
FROM inserted i
INNER JOIN deleted d ON i.[MyPrimaryKeyField] = d.[MyPrimaryKeyField]
-- For your table
SELECT d.CustomerName AS OldValue, i.CustomerName AS NewValue, system_user AS [User]
FROM inserted i
INNER JOIN deleted d ON i.appId = d.appId
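As a small sketch of the "column names from system tables" idea (this is not Nigel's script, just an illustration): inside the trigger you can walk sys.columns for the audited table and test each column's bit in COLUMNS_UPDATED() to see whether it appeared in the UPDATE's SET list.
-- list which columns of herb.app were named in the UPDATE that fired this trigger
DECLARE @col sysname, @colId int
DECLARE cols CURSOR LOCAL FAST_FORWARD FOR
    SELECT name, column_id FROM sys.columns WHERE object_id = OBJECT_ID('herb.app')
OPEN cols
FETCH NEXT FROM cols INTO @col, @colId
WHILE @@FETCH_STATUS = 0
BEGIN
    -- COLUMNS_UPDATED() is a bitmask with one bit per column, in column_id order
    IF (SUBSTRING(COLUMNS_UPDATED(), (@colId - 1) / 8 + 1, 1) & POWER(2, (@colId - 1) % 8)) > 0
        PRINT @col + ' was updated'
    FETCH NEXT FROM cols INTO @col, @colId
END
CLOSE cols
DEALLOCATE cols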
If you really need this kind of auditing in a way that's critical to your business look at SQL Server 2008's Change Data Capture feature. That feature alone could justify the cost of an upgrade.
Something like this for each field you want to track:
if UPDATE(Track_ID)
begin
insert into [log].DataChanges
(
dcColumnName,
dcID,
dcDataBefore,
dcDataAfter,
dcDateChanged,
dcUser,
dcTableName
)
select
'Track_ID',
d.Data_ID,
coalesce(d.Track_ID,-666),
coalesce(i.Track_ID,-666),
getdate(),
@user,
@table
from inserted i
join deleted d on i.Data_ID=d.Data_ID
and coalesce(d.Track_ID,-666)<>coalesce(i.Track_ID,-666)
end
'Track_ID' is the name of the field, and d.Data_ID is the primary key of the table you're tracking. @user is the user making the changes, and @table would be the table you're keeping track of changes in, in case you're tracking more than one table in the same log table.
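For context, that snippet is meant to sit inside an update trigger, with @user and @table set there. A minimal sketch of such a wrapper, assuming the tracked table is called dbo.Data (the name is made up to match the columns used above):
CREATE TRIGGER trg_Data_Audit ON dbo.Data FOR UPDATE
AS
BEGIN
    DECLARE @user varchar(50), @table varchar(50)
    SELECT @user = SYSTEM_USER, @table = 'Data'
    -- one "if UPDATE(<field>) ... insert into [log].DataChanges ..." block per tracked
    -- field goes here, exactly as in the snippet above
    if UPDATE(Track_ID)
        PRINT 'Track_ID changed by ' + @user   -- placeholder for the real logging insert
END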
Here's my quick and dirty audit table solution. (from http://freachable.net/2010/09/29/QuickAndDirtySQLAuditTable.aspx)
CREATE TABLE audit(
[on] datetime not null default getutcdate(),
[by] varchar(255) not null default system_user+','+AppName(),
was xml null,
[is] xml null
)
CREATE TRIGGER mytable_audit ON mytable for insert, update, delete as
INSERT audit(was,[is]) values(
(select * from deleted as [mytable] for xml auto,type),
(select * from inserted as [mytable] for xml auto,type)
)
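A quick example of reading the audit back, assuming the audited table has a customerName column as in herb.app above (the XPath pulls one attribute out of the FOR XML AUTO output):
SELECT
    [on],
    [by],
    was.value('(/mytable/@customerName)[1]', 'varchar(35)') AS customerName_was,
    [is].value('(/mytable/@customerName)[1]', 'varchar(35)') AS customerName_is
FROM audit
ORDER BY [on] DESC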