How do I effectively insert multiple rows for all of the Account-ID values without using a loop?
INSERT INTO Table1
(AccountID, ShowColumns, GroupColumns, AvgColumnsFlag)
VALUES
(1, 'foo1', 'foo2', 'foo3'),
(1, 'abc1', 'abc2', 'abc3'),
(1, 'xyz1', 'xyz1', 'xyz1')
In this case, I have over 20,000 account IDs. I could use another table that holds the unique account IDs and do some kind of join to get them, then use those in place of the Account-ID of "1" shown in the example.
I don't know how you all handle multiple inserts for each Account-ID.
Thanks...
[Edit]
I recently found a way to insert using data from another table, but unfortunately each statement can only insert 1 row per account, not multiple rows. :-( See the code below... Is it possible to consolidate the 3 statements into 1 instead?
INSERT INTO tblDealerSavedDataMyInventorySavedBuilds
(AccountId, LoadDefault, BuildName, ColumnShowAndSortOrderValues, ColumnGroupByValues, ColumnSortAverageValues)
SELECT DISTINCT tblaAccounts.AccountID, 0, 'My Inventory by Count', 'ImportStatus|StockNumber|Vin|Year|Make ASC|Model ASC|Trim|Mileage|PurchasePrice|StockDate|RepairCost|TotalCost|DaysInInventory|InventoryTrackerLocation|Category', 'Make|Model', 'MyInventoryCount-SortOrderByCount'
FROM tblaAccounts
ORDER BY tblaAccounts.AccountID ASC
INSERT INTO tblDealerSavedDataMyInventorySavedBuilds
(AccountId, LoadDefault, BuildName, ColumnShowAndSortOrderValues, ColumnGroupByValues, ColumnSortAverageValues)
SELECT DISTINCT tblaAccounts.AccountID, 0, 'My Inventory by Make', 'ImportStatus|StockNumber|Vin|Year|Make ASC|Model ASC|Trim|Mileage|PurchasePrice|StockDate|RepairCost|TotalCost|DaysInInventory|InventoryTrackerLocation|Category', 'Make|Model', 'MyInventoryCount-SortOrderByMake'
FROM tblaAccounts
ORDER BY tblaAccounts.AccountID ASC
INSERT INTO tblDealerSavedDataMyInventorySavedBuilds
(AccountId, LoadDefault, BuildName, ColumnShowAndSortOrderValues, ColumnGroupByValues, ColumnSortAverageValues)
SELECT DISTINCT tblaAccounts.AccountID, 0, 'My Inventory by Purchase Price', 'ImportStatus|StockNumber|Vin|Year|Make ASC|Model ASC|Trim|Mileage|PurchasePrice|StockDate|RepairCost|TotalCost|DaysInInventory|InventoryTrackerLocation|Category', 'Make|Model', 'MyInventoryCount-SortOrderByCost'
FROM tblaAccounts
ORDER BY tblaAccounts.AccountID ASC
First insert into #SourceTable all your values.
Then use this statement:
INSERT INTO Table1
SELECT *
FROM #SourceTable
It may look the same, but it's different, since you are addressing the table once instead of 20,000 times.
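Applied to the tables in the edit above, a minimal sketch of that idea could look like the following. The long pipe-delimited ColumnShowAndSortOrderValues string is shortened to a placeholder here, and the VALUES list simply restates the three builds from the statements above:
-- Sketch only: build all three rows per account in one pass, then hit the target table once.
SELECT a.AccountID,
       0 AS LoadDefault,
       b.BuildName,
       '<long pipe-delimited list>' AS ColumnShowAndSortOrderValues,  -- placeholder
       'Make|Model' AS ColumnGroupByValues,
       b.ColumnSortAverageValues
INTO #SourceTable
FROM (SELECT DISTINCT AccountID FROM tblaAccounts) AS a
CROSS JOIN (VALUES
    ('My Inventory by Count',          'MyInventoryCount-SortOrderByCount'),
    ('My Inventory by Make',           'MyInventoryCount-SortOrderByMake'),
    ('My Inventory by Purchase Price', 'MyInventoryCount-SortOrderByCost')
) AS b (BuildName, ColumnSortAverageValues);

INSERT INTO tblDealerSavedDataMyInventorySavedBuilds
    (AccountId, LoadDefault, BuildName, ColumnShowAndSortOrderValues, ColumnGroupByValues, ColumnSortAverageValues)
SELECT AccountID, LoadDefault, BuildName, ColumnShowAndSortOrderValues, ColumnGroupByValues, ColumnSortAverageValues
FROM #SourceTable;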
You can also do it this way:
INSERT INTO Table1
SELECT 1, 'foo1', 'foo2', 'foo3'
UNION ALL
SELECT 2, 'abc1', 'abc2', 'abc3'
UNION ALL
...
To insert multiple rows with hard-coded values use
insert into table (col1, col2, col3)
select 1, 'foo1', 'foo2', 'foo3'
union all
select 2, 'abc1', 'abc2', 'abc3'
etc.
To insert from existing data
insert into table (col1, col2, col3)
select srccol1, srccol2, srccol3
from TableOrView
Not sure how to achieve the result, need your help
Source A:
SELECT SourceAID
FROM [dbo].[SourceA]
Source B:
SELECT SourceBID
FROM [dbo].[SourceB]
Result table (select example):
SELECT SourceAID
,SourceBID
,Value
FROM [dbo].[Result]
Idea of the insert: for each SourceAID, I need to insert records with every SourceBID. There is no reference between these 2 tables.
Written by hand, it looks like this:
INSERT INTO [dbo].[Result] ([SourceAID], [SourceBID], [Value])
VALUES ('AID_1', 'BID_1', NULL),
('AID_1', 'BID_2', NULL),
('AID_1', 'BID_3', NULL),
('AID_2', 'BID_1', NULL),
('AID_2', 'BID_2', NULL),
('AID_2', 'BID_3', NULL)
and so on
As @Larnu said, use the following code:
INSERT INTO [dbo].[Result] ([SourceAID], [SourceBID], [Value])
SELECT
SA.SourceAID,
SB.SourceBID,
NULL
FROM
[dbo].[SourceA] AS SA
CROSS JOIN [dbo].[SourceB] AS SB
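If you want to see the shape of the result in isolation first, here is a self-contained sketch using the sample IDs from the question (the table variables exist only for the demo):
-- Demo only: CROSS JOIN produces every SourceAID/SourceBID combination.
DECLARE @SourceA TABLE (SourceAID VARCHAR(10));
DECLARE @SourceB TABLE (SourceBID VARCHAR(10));

INSERT INTO @SourceA VALUES ('AID_1'), ('AID_2');
INSERT INTO @SourceB VALUES ('BID_1'), ('BID_2'), ('BID_3');

SELECT SA.SourceAID, SB.SourceBID, NULL AS [Value]
FROM @SourceA AS SA
CROSS JOIN @SourceB AS SB
ORDER BY SA.SourceAID, SB.SourceBID;  -- 6 rows, the same ones written out by hand above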
The other way is to use subqueries:
INSERT INTO [dbo].[Result] ([SourceAID], [SourceBID], [Value])
SELECT SA.SourceAID, SB.SourceBID, NULL
FROM
(SELECT 1 AS ID, SourceAID FROM [dbo].[SourceA]) SA
JOIN
(SELECT 1 AS ID, SourceBID FROM [dbo].[SourceB]) SB
ON SA.ID = SB.ID
I'm using SSMS 17.9.1. I have a table with ContractNo and RightCodes columns. See the image attachment for sample data:
I need a SELECT statement that will return rows where RightCodes is LIKE '904' or '908', but where there are no other rows with the same ContractNo that have RightCodes LIKE '922', '923' or '924'.
So in the example data I would expect Rows 1 and 8 to be returned.
There can be x number of rows that have the same ContractNo. And RightCodes can have 1 to x number of values per ContractNo
Thank you very much in advance.
Make some data (noting the terrible way to store data):
CREATE TABLE #bad_table (
[row] INT,
contractno INT,
rightcodes VARCHAR(512));
INSERT INTO #bad_table SELECT 1, 28205, '904,908';
INSERT INTO #bad_table SELECT 2, 28205, '911,913';
INSERT INTO #bad_table SELECT 3, 28194, '904';
INSERT INTO #bad_table SELECT 4, 28194, '923,924';
INSERT INTO #bad_table SELECT 5, 28194, '922';
INSERT INTO #bad_table SELECT 6, 28192, '908,923';
INSERT INTO #bad_table SELECT 7, 28192, '911';
INSERT INTO #bad_table SELECT 8, 28193, '904';
Write a query to pull out the data as required:
SELECT
*
FROM
#bad_table b
WHERE
(b.rightcodes LIKE '%904%' OR b.rightcodes LIKE '%908%')
AND NOT EXISTS (SELECT * FROM #bad_table xb WHERE xb.contractno = b.contractno AND (xb.rightcodes LIKE '%922%' OR xb.rightcodes LIKE '%923%' OR xb.rightcodes LIKE '%924%'));
This gives the results you want, rows 1 and 8.
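As a side note, the LIKE patterns can also match longer codes (for example '1904' would match '%904%'). If the server is SQL Server 2016 or later, a sketch that splits the list with STRING_SPLIT avoids that:
-- Sketch, assuming SQL Server 2016+ for STRING_SPLIT.
SELECT b.*
FROM #bad_table b
WHERE EXISTS (SELECT 1
              FROM STRING_SPLIT(b.rightcodes, ',') s
              WHERE s.value IN ('904', '908'))
AND NOT EXISTS (SELECT 1
                FROM #bad_table xb
                CROSS APPLY STRING_SPLIT(xb.rightcodes, ',') xs
                WHERE xb.contractno = b.contractno
                  AND xs.value IN ('922', '923', '924'));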
I have 2 tables namely "Item" and "Messages".
Item table has the columns like Id, Amount, etc.
Messages table has the columns like ItemId, Count, Comment, etc.
Here the common link between these 2 tables is the "Id" from Item and "ItemId" from Messages.
The "Count" column in the Messages table is just the count of comments per ItemId. i.e. When user updates the comment for any record, an entry gets created in the Messages table and Count for that particular ItemId shows as 1. If user updates one more comment to same record, the Count shows 2 and so on. If user does not update comment for a certain record, the entry does not get created in Messages table at all (NULL).
I want to capture all the records from the Item table irrespective of whether user has updated comment or not. If there are 0 comments, the query should return NULL in the Comments column for that record. But, If the user has updated the comment, it should pick up the comment having the highest "Count". E.g. if one record has 8 comments, the query should return only the record where Messages.Count=8 and not all 8 records. If only one comment, then that comment should be seen.
I have written LEFT OUTER JOIN but not able to get through as it shows all 8 records. In the results, I find 7 records with NULL as the count and the 8th record showing count as 8 but I need only this 8th record and not the other 7.
Any help would be highly appreciated. Below is my query:
Select
Id,
Amount,
Messages.Comment As Comments
From Item
Left Outer Join Messages ON Messages.ItemId=Item.Id
Left Outer Join (Select ItemId, MAX(Id) as max_id from Messages Group by ItemId) T ON Messages.ItemId=T.ItemId and Messages.Id=T.max_id
Where amount > 100
I've hooked up an example using temp tables which I think covers what you're looking for. Just remove the temp table stuff and replace with your actual tables and it should work.
CREATE TABLE #Item ( ID int PRIMARY KEY,
Amount numeric(9,2))
CREATE TABLE #Messages ( ItemId int REFERENCES #Item(ID),
[Count] smallint,
Comment nvarchar(max))
INSERT INTO #Item (ID, Amount)
SELECT 1, 100
UNION
SELECT 2, 120
UNION
SELECT 3, 140
UNION
SELECT 4, 50
INSERT INTO #Messages ( ItemID,
[Count],
Comment)
SELECT 1, 1, 'Comment 1 - 1'
UNION
SELECT 1, 2, 'Comment 1 - 2'
UNION
SELECT 2, 1, 'Comment 2 - 1'
UNION
SELECT 2, 1, 'Comment 3 - 1'
UNION
SELECT 2, 2, 'Comment 3 - 2'
SELECT I.Id,
I.Amount,
M.Comment
FROM #Item AS I
OUTER APPLY ( SELECT TOP 1 M.Comment
FROM #Messages AS M
WHERE M.ItemId = I.ID
ORDER BY M.[Count] DESC) AS M
WHERE i.amount > 100
DROP TABLE #Messages
DROP TABLE #Item
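If APPLY is not an option, a ROW_NUMBER-based sketch against the Item and Messages tables from the question (assuming the Count column described there) would look like this:
-- Sketch: keep only the highest-Count message per item; Comment stays NULL when there is none.
SELECT I.Id,
       I.Amount,
       M.Comment AS Comments
FROM Item AS I
LEFT JOIN (SELECT ItemId,
                  Comment,
                  ROW_NUMBER() OVER (PARTITION BY ItemId ORDER BY [Count] DESC) AS rn
           FROM Messages) AS M
    ON M.ItemId = I.Id
   AND M.rn = 1
WHERE I.Amount > 100;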
go for it bro....
Select
Item.Id,
Item.Amount,
M.Comment As Comments
From Item
Left Outer Join (Select ItemId, MAX(Id) as max_id from Messages Group by ItemId) T ON Item.Id = T.ItemId
Left Outer Join Messages M ON M.Id = T.max_id
Where Item.Amount > 100
I need to prove the existence and the amount of the values from table1 in an MS SQL database.
The table1 for proving has the following values:
MANDT DOKNR LFDNR
1 0020999956
1 0020999958
1 0020999960 2
1 0020999960 3
1 0020999960
1 0020999962
As you can see, there are single rows and then there are special cases where values are duplicated with a running number (meaning the value exists three times in the source), so all 2nd/3rd/further entries get an increasing number in LFDNR.
The target table2 (where I need to prove the amount/existence) has two columns with matching data:
DataID Facet
42101976 0020999956
42100240 0020999958
65688960 0020999960
65694287 0020999960
65697507 0020999960
42113401 0020999962
I would like to insert the DataID from the 2nd table into the first table as a 'proof', so I can see if anything is missing from table2 and keep table1 as the proof.
I tried to use joins and then thought about a WHILE loop running through all the rows, but my knowledge of writing such scripts stops there.
Edit:
Output should be then:
MANDT DOKNR LFDNR DataID
1 0020999956 42101976
1 0020999958 42100240
1 0020999960 2 65688960
1 0020999960 3 65694287
1 0020999960 65697507
1 0020999962 42113401
But it could be, for example, that a row in table 2 is missing, so a DataID would be empty then (and show that one is missing).
Any help appreciated!
You can use ROW_NUMBER to calculate [LFDNR] for each row in the second table and then update the first table. If [DataID] is null after the update, we have a mismatch.
CREATE TABLE #table1
(
[MANDT] INT
,[DOKNR] VARCHAR(32)
,[LFDNR] INT
,[DataID] INT
);
CREATE TABLE #table2
(
[DataID] INT
,[Facet] VARCHAR(32)
);
INSERT INTO #table1 ([MANDT], [DOKNR], [LFDNR])
VALUES (1, '0020999956', NULL)
,(1, '0020999958', NULL)
,(1, '0020999960', 2)
,(1, '0020999960', 3)
,(1, '0020999960', NULL)
,(1, '0020999962',NULL)
INSERT INTO #table2 ([DataID], [Facet])
VALUES (42101976, '0020999956')
,(42100240, '0020999958')
,(65688960, '0020999960')
,(65694287, '0020999960')
,(65697507, '0020999960')
,(42113401, '0020999962');
WITH DataSource ([DataID], [DOKNR], [LFDNR]) AS
(
SELECT *
,ROW_NUMBER() OVER (PARTITION BY [Facet] ORDER BY [DataID])
FROM #table2
)
UPDATE #table1
SET [DataID] = DS.[DataID]
FROM #table1 T
INNER JOIN DataSource DS
ON T.[DOKNR] = DS.[DOKNR]
AND ISNULL(T.[LFDNR], 1) = DS.[LFDNR];
SELECT *
FROM #table1;
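To list the mismatches mentioned above, a quick follow-up check could be:
-- Rows from table1 that found no counterpart in table2 after the update.
SELECT *
FROM #table1
WHERE [DataID] IS NULL;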
I've tried to illustrate the problem in the (made-up) example below. Essentially, I want to filter records in the primary table based on content in a secondary table. When I attempted this using subqueries, our application performance took a big hit (some queries nearly 10x slower).
In this example I want to return all case notes for a customer EXCEPT for the ones that have references to products 1111 and 2222 in the detail table:
select cn.id, cn.summary from case_notes cn
where customer_id = 2
and exists (
select 1 from case_note_details cnd
where cnd.case_note_id = cn.id
and cnd.product_id not in (1111,2222)
)
I tried using a join as well:
select distinct cn.id, cn.summary from case_notes cn
join case_note_details cnd
on cnd.case_note_id = cn.id
and cnd.product_id not in (1111,2222)
where customer_id = 2
In both cases the execution plan shows two clustered index scans. Any suggestions for other methods or tweaks to improve performance?
Schema:
CREATE TABLE case_notes
(
id int primary key,
employee_id int,
customer_id int,
order_id int,
summary varchar(50)
);
CREATE TABLE case_note_details
(
id int primary key,
case_note_id int,
product_id int,
detail varchar(1024)
);
Sample data:
INSERT INTO case_notes
(id, employee_id, customer_id, order_id, summary)
VALUES
(1, 1, 2, 1000, 'complaint1'),
(2, 1, 2, 1001, 'complaint2'),
(3, 1, 2, 1002, 'complaint3'),
(4, 1, 2, 1003, 'complaint4');
INSERT INTO case_note_details
(id, case_note_id, product_id, detail)
VALUES
(1, 1, 1111, 'Note 1, order 1000, complaint about product 1111'),
(2, 1, 2222, 'Note 1, order 1000, complaint about product 2222'),
(3, 2, 1111, 'Note 2, order 1001, complaint about product 1111'),
(4, 2, 2222, 'Note 2, order 1001, complaint about product 2222'),
(5, 3, 3333, 'Note 3, order 1002, complaint about product 3333'),
(6, 3, 4444, 'Note 3, order 1002, complaint about product 4444'),
(7, 4, 5555, 'Note 4, order 1003, complaint about product 5555'),
(8, 4, 6666, 'Note 4, order 1003, complaint about product 6666');
You have a clustered index scan because you are not accessing your case_note_details table by its id but via non-indexed columns.
I suggest adding an index to the case_note_details table on (case_note_id, product_id).
If you are always accessing the case_note_details via the case_note_id, you might also restructure your primary key to be case_note_id, detail_id. There is no need for an independent id as primary key for dependent records. This would let you re-use your detail primary key index for joins with the header table.
Edit: add an index on customer_id as well to the case_notes table, as Manuel Rocha suggested.
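A sketch of the suggested indexes (the index names are illustrative):
-- Illustrative names; adjust to your own naming convention.
CREATE INDEX IX_case_note_details_case_note_id_product_id
    ON case_note_details (case_note_id, product_id);

CREATE INDEX IX_case_notes_customer_id
    ON case_notes (customer_id);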
When using "exists" I always limit results with "TOP" as bellow:
select cn.id
,cn.summary
from case_notes as cn
where customer_id = 2
and exists (
select TOP 1 1
from case_note_details as cnd
where cnd.case_note_id = cn.id
and cnd.product_id not in (1111,2222)
)
On the case_notes table, create an index on customer_id, and on the case_note_details table, create an index on case_note_id and product_id.
Then try executing both queries. They should perform better now.
Try also this query
select
cn.id,
cn.summary
from
case_notes cn
where
cn.customer_id = 2 and
cn.id in
(
select
distinct cnd.case_note_id
from
case_note_details cnd
where
cnd.product_id not in (1111,2222)
)
Did you try "in" instead of "exists". This sometimes performs differently:
select cn.id, cn.summary from case_notes cn
where customer_id = 2
and cn.id in (
select cnd.case_note_id from case_note_details cnd
where cnd.product_id not in (1111,2222)
)
Of course, check indexes.