SQL Server insert multiple rows efficiently - sql-server

I have an old classic ASP application which needs to insert many thousands of rows into a SQL Server 2008 table. Currently the application sends a separate INSERT statement for each row, which takes a long time and locks the table in the meantime.
Is there a better way to do this? For example, insert all rows into a temp table and then do a SELECT INTO from the temp table?

If you're generating the list of dates in the application itself, you should be able to build the statement in one of the forms below. In SQL Server 2008 you can insert multiple rows in a single command, which is considerably better than inserting row by row.
Here are a couple of examples, using a table variable for dummy data and GETDATE() to generate a few different dates (which you would obviously be generating in your application):
DECLARE @TABLE AS TABLE
(
    RowID INT IDENTITY,
    MyDate DATETIME
);

INSERT INTO @TABLE (MyDate)
VALUES
    (GETDATE()),
    (GETDATE() + 1),
    (GETDATE() + 2),
    (GETDATE() + 3),
    (GETDATE() + 4),
    (GETDATE() + 5),
    (GETDATE() + 6);

SELECT * FROM @TABLE;
Returns:
RowID | MyDate
1 | 26/11/2017 10:51:49
2 | 27/11/2017 10:51:49
3 | 28/11/2017 10:51:49
4 | 29/11/2017 10:51:49
5 | 30/11/2017 10:51:49
6 | 01/12/2017 10:51:49
7 | 02/12/2017 10:51:49
You can also use this format:
INSERT INTO @TABLE (MyDate)
SELECT GETDATE()
UNION ALL
SELECT GETDATE() + 1
UNION ALL
SELECT GETDATE() + 2
UNION ALL
SELECT GETDATE() + 3
UNION ALL
SELECT GETDATE() + 4
UNION ALL
SELECT GETDATE() + 5
UNION ALL
SELECT GETDATE() + 6;

SELECT * FROM @TABLE;
Returns the same result set as the first example.
I'm not an ASP expert, but if you're building the SQL string in your application, you should be able to append each row to one growing multi-row INSERT rather than creating a whole new INSERT statement for each date.
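One caveat worth knowing if you go the multi-row route: a single VALUES list (table value constructor) is limited to 1000 rows, so a statement generated by the application should be split into chunks. A minimal sketch of what the generated batches might look like, where dbo.MyTable and MyDate are placeholder names:

-- hypothetical generated batch; each INSERT may carry at most 1000 value rows
INSERT INTO dbo.MyTable (MyDate)
VALUES ('20171126'), ('20171127'), ('20171128');  -- ... up to 1000 rows here

INSERT INTO dbo.MyTable (MyDate)
VALUES ('20171129'), ('20171130');                -- remaining rows continue here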

Related

How to remove extension dates in SQL Server

I have a table emp with file names that include a date, and I want to remove the date part:

FileName                | id
------------------------+---
c:\abc_20181008.txt     | 1
c:\xyz_20181007.dat     | 2
c:\abc_xyz_20181007.dat | 3
c:\ab.xyz_20181007.txt  | 4

Based on the above data, I want output like below:

FileName       | id
---------------+---
c:\abc.txt     | 1
c:\xyz.dat     | 2
c:\abc_xyz.dat | 3
c:\ab.xyz.txt  | 4
I have tried this:

select substring(Filename, replace(filename, '.', ''), len(filename)), id
from emp

But this query does not return the expected result in SQL Server.
Please tell me how to write a query to achieve this task in SQL Server.
You can use the following query:

SELECT id, filename,
       LEFT(filename, LEN(filename) - i1) + RIGHT(filename, i2 - 1)
FROM emp
CROSS APPLY
(
    SELECT CHARINDEX('_', REVERSE(filename)) AS i1,
           PATINDEX('%[0-9]%', REVERSE(filename)) AS i2
) AS x
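To see why this works, take 'c:\abc_20181008.txt': REVERSE gives 'txt.80018102_cba\:c', so i1 = 13 (the position of the underscore) and i2 = 5 (the first digit). LEFT(filename, 19 - 13) keeps 'c:\abc' and RIGHT(filename, 5 - 1) keeps '.txt', giving 'c:\abc.txt'.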
You can try this as well:

declare @t table (a varchar(50))

insert into @t values ('c:\abc_20181008.txt')
insert into @t values ('c:\abc_xyz_20181007.dat')
insert into @t values ('c:\ab.xyz_20181007.txt')
insert into @t values ('c:\ab.xyz_20182007.txt')

-- cut at the first '2' (the start of the date), keep the 4-character suffix
-- ('.' plus extension), then collapse the leftover '_.' into '.'
select replace(SUBSTRING(a, 1, CHARINDEX('2', a) - 1) + SUBSTRING(a, len(a) - 3, LEN(a)), '_.', '.')
from @t
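For the sample rows above, this should return something like:

c:\abc.txt
c:\abc_xyz.dat
c:\ab.xyz.txt
c:\ab.xyz.txt

Note that this approach assumes the date portion always starts with a '2' and the extension is always three characters long.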

How can we take a running sum of a column in SQL Server without using ;with cte?

How can I get the sum of each row plus all previous rows' values in a third column?
For id 1 the sum is 10, for id 2 the sum is 10 + 50 = 60, the third sum is 60 + 100 = 160, and so on.
With a CTE it works fine for me, but I need a way to get the sum without using ;WITH CTE.
An example of what works today is shown below:
DECLARE @t TABLE (ColumnA INT, ColumnB VARCHAR(50));

INSERT INTO @t
VALUES (10, '1'), (50, '2'), (100, '3'), (5, '4'), (45, '5');

;WITH cte AS
(
    SELECT ColumnB, SUM(ColumnA) asum
    FROM @t
    GROUP BY ColumnB
), cteRanked AS
(
    SELECT asum, ColumnB, ROW_NUMBER() OVER (ORDER BY ColumnB) rownum
    FROM cte
)
SELECT
    (SELECT SUM(asum)
     FROM cteRanked c2
     WHERE c2.rownum <= c1.rownum) AS ColumnA,
    ColumnB
FROM cteRanked c1;
One option, which doesn't require explicit analytic functions, is a correlated subquery that calculates the running total (note that it rescans the table for every row, so it only suits small tables):

SELECT
    t1.ID,
    t1.Currency,
    (SELECT SUM(t2.Currency) FROM yourTable t2 WHERE t2.ID <= t1.ID) AS [Sum]
FROM yourTable t1
It looks like you need a simple running total.
There is an easy and efficient way to calculate a running total in SQL Server 2012 and later: SUM(...) OVER (ORDER BY ...), as in the example below.
Sample data
DECLARE @t TABLE (ColumnA INT, ColumnB VARCHAR(50));

INSERT INTO @t
VALUES (10, '1'), (50, '2'), (100, '3'), (5, '4'), (45, '5');
Query
SELECT
    ColumnB,
    ColumnA,
    SUM(ColumnA) OVER (ORDER BY ColumnB
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS SumColumnA
FROM @t
ORDER BY ColumnB;
Result
+---------+---------+------------+
| ColumnB | ColumnA | SumColumnA |
+---------+---------+------------+
| 1 | 10 | 10 |
| 2 | 50 | 60 |
| 3 | 100 | 160 |
| 4 | 5 | 165 |
| 5 | 45 | 210 |
+---------+---------+------------+
For SQL Server 2008 and below you need to use either a correlated subquery, as you already do, or a simple cursor, which may be faster if the table is large.
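For reference, here is a minimal sketch of the cursor approach, assuming the same @t sample table as above:

DECLARE @ColumnA INT, @ColumnB VARCHAR(50), @RunningTotal INT = 0;
DECLARE @Results TABLE (ColumnB VARCHAR(50), ColumnA INT, SumColumnA INT);

DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT ColumnA, ColumnB FROM @t ORDER BY ColumnB;

OPEN c;
FETCH NEXT FROM c INTO @ColumnA, @ColumnB;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- accumulate row by row, storing each intermediate total
    SET @RunningTotal = @RunningTotal + @ColumnA;
    INSERT INTO @Results (ColumnB, ColumnA, SumColumnA)
    VALUES (@ColumnB, @ColumnA, @RunningTotal);
    FETCH NEXT FROM c INTO @ColumnA, @ColumnB;
END;
CLOSE c;
DEALLOCATE c;

SELECT * FROM @Results ORDER BY ColumnB;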

SQL Server : Bulk insert a Datatable into 2 tables

Consider this datatable:

word       wordCount  documentId
---------  ---------  ----------
Ball       10         1
School     11         1
Car        4          1
Machine    3          1
House      1          2
Tree       5          2
Ball       4          2
I want to insert this data into two tables with the following structure:
Table WordDictionary
(
Id int,
Word nvarchar(50),
DocumentId int
)
Table WordDetails
(
Id int,
WordId int,
WordCount int
)
FOREIGN KEY (WordId) REFERENCES WordDictionary(Id)
But because I have thousands of records in the initial table, I have to do this in a single transaction (a batch query); a bulk insert, for example, could help here.
The question is how I can split this data between the two tables WordDictionary and WordDetails.
In more detail, the final result must look like this:
Table WordDictionary:

Id  word
--  -------
1   Ball
2   School
3   Car
4   Machine
5   House
6   Tree
and table WordDetails:

Id  wordId  WordCount  DocumentId
--  ------  ---------  ----------
1   1       10         1
2   2       11         1
3   3       4          1
4   4       3          1
5   5       1          2
6   6       5          2
7   1       4          2
Notice: words in the source can be duplicated, so I must check whether a word already exists in WordDictionary before inserting into either table; if the word is found, the existing word's Id must be used in WordDetails (see the word Ball).
Finally, the million-dollar problem: this insert must be done as fast as possible.
If you're looking to just load the tables the first time, without any updates over time, you could do it this way (note that SELECT ... INTO creates the target tables, so they must not already exist):
You can put all of the distinct words from the datatable into the WordDictionary table first, generating the Id values as you go:
-- generate Id values with ROW_NUMBER() so the next step can join on them
SELECT ROW_NUMBER() OVER (ORDER BY word) AS Id, word
INTO WordDictionary
FROM (SELECT DISTINCT word FROM datatable) AS d;
Then, after you populate WordDictionary, you can use its Id values and the rest of the information from datatable to load your WordDetails table:

SELECT ROW_NUMBER() OVER (ORDER BY DT.documentId, WD.Id) AS Id,
       WD.Id AS wordId, DT.wordCount AS WordCount, DT.documentId AS DocumentId
INTO WordDetails
FROM datatable AS DT
INNER JOIN WordDictionary AS WD ON WD.word = DT.word;
There is a little discrepancy between the declared table schema and your example data, but it is resolved below:
1) Setup

-- this is the table with the initial data
-- drop table DocumentWordData
create table DocumentWordData
(
    Word NVARCHAR(50),
    WordCount INT,
    DocumentId INT
)
GO

-- these are the result tables with extra information (identity columns, primary key constraints, a working foreign key definition)
-- drop table WordDictionary
create table WordDictionary
(
    Id int IDENTITY(1, 1) CONSTRAINT PK_WordDictionary PRIMARY KEY,
    Word nvarchar(50)
)
GO

-- drop table WordDetails
create table WordDetails
(
    Id int IDENTITY(1, 1) CONSTRAINT PK_WordDetails PRIMARY KEY,
    WordId int CONSTRAINT FK_WordDetails_Word REFERENCES WordDictionary,
    WordCount int,
    DocumentId int
)
GO
2) The actual script to put data in the last two tables

begin tran
-- this is to make sure that if anything in this block fails, everything is automatically rolled back
set xact_abort on

-- the dictionary is obtained by considering all distinct words
insert into WordDictionary (Word)
select distinct Word
from DocumentWordData

-- the details are generated from the initial data by joining the word dictionary to get the word id
insert into WordDetails (WordId, WordCount, DocumentId)
select W.Id, DWD.WordCount, DWD.DocumentId
from DocumentWordData DWD
join WordDictionary W on W.Word = DWD.Word

commit

-- just to test the results
select * from WordDictionary
select * from WordDetails
I expect this script to run very fast as long as you do not have a very large number of records (a few million at most).
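If the load has to run repeatedly as new documents arrive, a minimal extension of the same script (a sketch, assuming the same table names) is to insert only the words that are not yet in the dictionary, which also covers the duplicate-word requirement from the question:

-- add only words that are not in the dictionary yet
insert into WordDictionary (Word)
select distinct DWD.Word
from DocumentWordData DWD
where not exists (select 1 from WordDictionary W where W.Word = DWD.Word)

-- the details insert is unchanged: the join picks up both old and new word ids
insert into WordDetails (WordId, WordCount, DocumentId)
select W.Id, DWD.WordCount, DWD.DocumentId
from DocumentWordData DWD
join WordDictionary W on W.Word = DWD.Word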
This is the query; I'm using temp tables to be able to test it. If you use the two CTEs below, you'll be able to generate the final result.
1. Setting up sample data for the test:
create table #original (word varchar(10), wordCount int, documentId int)
insert into #original values
('Ball', 10, 1),
('School', 11, 1),
('Car', 4, 1),
('Machine', 3, 1),
('House', 1, 2),
('Tree', 5, 2),
('Ball', 4, 2)
2. Use cte1 and cte2. In your real database, you need to replace #original with the actual table holding your initial records.
;with cte1 as (
    select ROW_NUMBER() over (order by word) Id, word
    from #original
    group by word
)
select * into #WordDictionary
from cte1

;with cte2 as (
    select ROW_NUMBER() over (order by #original.word) Id, #WordDictionary.Id as wordId,
           #original.word, #original.wordCount, #original.documentId
    from #WordDictionary
    inner join #original on #original.word = #WordDictionary.word
)
select * into #WordDetails
from cte2

select * from #WordDetails
This will be the data in #WordDetails:
+----+--------+---------+-----------+------------+
| Id | wordId | word | wordCount | documentId |
+----+--------+---------+-----------+------------+
| 1 | 1 | Ball | 10 | 1 |
| 2 | 1 | Ball | 4 | 2 |
| 3 | 2 | Car | 4 | 1 |
| 4 | 3 | House | 1 | 2 |
| 5 | 4 | Machine | 3 | 1 |
| 6 | 5 | School | 11 | 1 |
| 7 | 6 | Tree | 5 | 2 |
+----+--------+---------+-----------+------------+

Group SQL Server Select Results into different Result tables

Is it possible in SQL Server to "group" a result from a single query based on data in a specific column as if I ran multiple select queries?
I'm looking for a lazy way to extract data such as the below:
StoreId | ClientId
1 | 4
1 | 5
2 | 5
2 | 6
2 | 7
3 | 8
whereby every store ID's results are grouped into their own table.
Whilst I could write a separate select statement for every store ID, the list is too long to do so.
I can't imagine that this is really helpful, but you can use dynamic SQL to do something like the following. I can't say I would recommend this approach for generating Excel documents, but whatever.
create table #Something
(
    StoreID int,
    ClientID int
)

insert #Something
select 1, 4 union all
select 1, 5 union all
select 2, 5 union all
select 2, 6 union all
select 2, 7

declare @sql nvarchar(max) = ''

select @sql = @sql + 'select StoreID, ClientID from #Something where StoreID = ' + CAST(StoreID as varchar(4)) + ';'
from #Something
group by StoreID

select @sql
exec sp_executesql @sql

drop table #Something
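For the sample rows above, the concatenated @sql ends up looking like this (one statement per distinct StoreID), so sp_executesql returns a separate result set per store:

select StoreID, ClientID from #Something where StoreID = 1;
select StoreID, ClientID from #Something where StoreID = 2;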

SQL insert based on maximum date

I have to insert values from one table into another, but the condition is that out of the rows sharing the same id, I have to select the one having the maximum date and insert that into the other table. Like so:
table 1
a | b
1 | 12/1/13
1 | 18/1/13
2 | 2/4/13
2 | 9/8/13
table 2
a | b
1 | 18/1/13
2 | 9/8/13
Please suggest the SQL query for this.
Could you try:

INSERT INTO Table2 (idcolumn, datecolumn)
SELECT idcolumn, MAX(datecolumn)  -- the non-grouped column must be aggregated
FROM Table1
GROUP BY idcolumn
INSERT INTO table2(a,b)
SELECT a, MAX(b) AS b
FROM table1
GROUP BY a;
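If table1 has more columns that need to travel with the latest row per id, a ROW_NUMBER() variant is a common alternative. A sketch, assuming only columns a and b as in the question:

INSERT INTO table2 (a, b)
SELECT a, b
FROM (
    SELECT a, b,
           ROW_NUMBER() OVER (PARTITION BY a ORDER BY b DESC) AS rn
    FROM table1
) AS x
WHERE rn = 1;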
