I have a table with about 70,000,000 rows of phone numbers. I use OFFSET to read those numbers 50 at a time.
But it takes a long time (about 1 minute).
There is a full-text index on the table, however it is used for searching and has no effect on the OFFSET.
How can I speed up my query?
SELECT *
FROM tblPhoneNumber
WHERE CountryID = @CountryID
ORDER BY ID
OFFSET ((@NumberCount - 1) * @PackageSize) ROWS
FETCH NEXT @PackageSize ROWS ONLY
Throw a sequence on that table, index it and fetch ranges by sequence. You could alternatively just use the ID column.
select *
FROM tblPhoneNumber
WHERE
CountryID = #CountryID
and Sequence between @NumberCount and (@NumberCount + @PackageSize)
If you're inserting/deleting frequently, this can leave gaps in the sequence. Depending on the code that consumes these batches of numbers, that might be a problem, but in general a few gaps here and there may not matter for you.
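If gaps are acceptable, another option is keyset (seek) pagination on the existing ID column: remember the highest ID returned by the previous batch and seek past it, so the engine never has to count and discard the skipped rows the way OFFSET does. A minimal sketch, assuming an index on (CountryID, ID) and a @LastID value carried over from the previous call (both assumptions of mine, not from the question):

DECLARE @CountryID int = 1;      -- example value
DECLARE @PackageSize int = 50;
DECLARE @LastID int = 0;         -- highest ID returned by the previous batch; 0 for the first batch

SELECT TOP (@PackageSize) *
FROM tblPhoneNumber
WHERE CountryID = @CountryID
  AND ID > @LastID               -- seek straight to the start of the next batch
ORDER BY ID;

Unlike OFFSET, the cost of this stays roughly constant no matter how deep into the table you page.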
Try using CROSS APPLY instead of OFFSET FETCH and do it all in one go. I grab TOP 2 to show you that you can grab any number of rows.
IF OBJECT_ID('tempdb..#tblPhoneNumber') IS NOT NULL
DROP TABLE #tblPhoneNumber;
IF OBJECT_ID('tempdb..#Country') IS NOT NULL
DROP TABLE #Country;
CREATE TABLE #tblPhoneNumber (ID INT, Country VARCHAR(100), PhoneNumber INT);
CREATE TABLE #Country (Country VARCHAR(100));
INSERT INTO #Country
VALUES ('USA'),('UK');
INSERT INTO #tblPhoneNumber
VALUES (1,'USA',11111),
(2,'USA',22222),
(3,'USA',33333),
(4,'UK',44444),
(5,'UK',55555),
(6,'UK',66666);
SELECT *
FROM #Country
CROSS APPLY(
SELECT TOP (2) ID,Country,PhoneNumber --Just change to TOP(50) for your code
FROM #tblPhoneNumber
WHERE #Country.Country = #tblPhoneNumber.Country
) CA
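One thing to watch (my note, not part of the answer above): TOP without an ORDER BY returns an arbitrary set of rows, so the batches are not guaranteed to be stable between runs. If the batches need to be deterministic, you could order inside the APPLY, for example:

SELECT *
FROM #Country
CROSS APPLY(
SELECT TOP (2) ID, Country, PhoneNumber
FROM #tblPhoneNumber
WHERE #Country.Country = #tblPhoneNumber.Country
ORDER BY ID --makes the TOP deterministic
) CA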
I have the data ready to Insert into my Production table however the ID column is NULL and that needs to be pre-populated with the IDs prior to Insert. I have these IDs in another Temp Table... all I want is to simply apply these IDs to the records in my Temp Table.
For example... Say I have 10 records all simply needing IDs. I have in another temp table exactly 10 IDs... they simply need to be applied to my 10 records in my 'Ready to INSERT' Temp Table.
I worked in Oracle for about 9 years, and I would have done this simply by looping over my 'Collection' using a FORALL loop... basically I would loop over my 'Ready to INSERT' temp table and, for each row, apply the ID from my other 'Collection'. In SQL Server I'm working with Temp Tables, NOT Collections, and well... there's no FORALL loop or really any fancy loops in SQL Server other than WHILE.
My goal is to know the appropriate method to accomplish this in SQL Server. I have learned that in the SQL Server world so many of the DML operations are SET based, whereas when I worked in Oracle we handled data via arrays/collections and, using CURSORs or loops, we would simply iterate through the data. I've seen that in the SQL Server world using CURSORs and/or iterating through data record by record is frowned upon.
Help me get my head out of the 'Oracle' space I was in for so long and into the 'SQL Server' space I need to be in. This has been a slight struggle.
The code below is how I've currently implemented this however it just seems convoluted.
SET NOCOUNT ON;
DECLARE @KeyValueNewMAX INT,
@KeyValueINuse INT,
@ClientID INT,
@Count INT;
DROP TABLE IF EXISTS #InterOtherSourceData;
DROP TABLE IF EXISTS #InterOtherActual;
DROP TABLE IF EXISTS #InterOtherIDs;
CREATE TABLE #InterOtherSourceData -- Data stored here for DML until data is ready for INSERT
(
UniqueID INT IDENTITY( 1, 1 ),
NewIntOtherID INT,
ClientID INT
);
CREATE TABLE #InterOtherActual -- Prod Table where the data will be INSERTED Into
(
IntOtherID INT,
ClientID INT
);
CREATE TABLE #InterOtherIDs -- Store IDs needing to be applied to Data
(
UniqueID INT IDENTITY( 1, 1 ),
NewIntOtherID INT
);
BEGIN
/* TEST Create Fake Data and store it in temp table */
WITH fakeIntOtherRecs AS
(
SELECT 1001 AS ClientID, 'Jake' AS fName, 'Jilly' AS lName UNION ALL
SELECT 2002 AS ClientID, 'Jason' AS fName, 'Bateman' AS lName UNION ALL
SELECT 3003 AS ClientID, 'Brain' AS fName, 'Man' AS lName
)
INSERT INTO #InterOtherSourceData (ClientID)
SELECT fc.ClientID--, fc.fName, fc.lName
FROM fakeIntOtherRecs fc
;
/* END TEST Prep Fake Data */
/* Obtain count so we know how many IDs we need to create */
SELECT @Count = COUNT(*) FROM #InterOtherSourceData;
PRINT 'Count: ' + CAST(@Count AS VARCHAR);
/* For testing set value OF KeyValuePre to the max key currently in use by Table */
SELECT @KeyValueINuse = 13;
/* Using the @Count let's obtain the new MAX ID... basically Existing_Key + SourceRecordCount = New_MaxKey */
SELECT @KeyValueNewMAX = @KeyValueINuse + @Count /* STORE new MAX ID in variable */
/* Print both keys for testing purposes to review */
PRINT 'KeyValue Current: ' + CAST(@KeyValueINuse AS VARCHAR) + ' KeyValue Max: ' + CAST(@KeyValueNewMAX AS VARCHAR);
/* Using recursive CTE generate a fake table containing all of the IDs we want to INSERT into Prod Table */
WITH CTE AS
(
SELECT (@KeyValueNewMAX - @Count) + 1 AS STARTMINID, @KeyValueNewMAX AS ENDMAXID UNION ALL
/* SELECT FROM CTE to create Recursion */
SELECT STARTMINID + 1 AS STARTMINID, ENDMAXID FROM CTE
WHERE (STARTMINID + 1) < (@KeyValueNewMAX + 1)
)
INSERT INTO #InterOtherIDs (NewIntOtherID)
SELECT c.STARTMINID AS NewIntOtherID
FROM CTE c
;
/* Apply New IDs : Using the IDENTITY fields on both Temp Tables I can JOIN the tables by the IDENTITY columns
| Is there a BETTER Way to do this?... like LOOP over each record rather than having to build up common IDs in both tables using IDENTITY columns?
*/
UPDATE o SET NewIntOtherID = oi.NewIntOtherID
FROM #InterOtherSourceData o
JOIN #InterOtherIDs oi ON oi.UniqueID = o.UniqueID
;
/* View data that is ready for insert */
--SELECT *
--FROM #InterOtherSourceData
--;
/* INSERT DATA INTO PRODUCTION TABLE */
INSERT INTO #InterOtherActual (IntOtherID, ClientId)
SELECT NewIntOtherID, ClientID
FROM #InterOtherSourceData
;
SELECT * FROM #InterOtherActual;
END
To pre-generate key values in SQL Server use a sequence rather than an IDENTITY column.
eg
drop table if exists t
drop table if exists #t_stg
drop sequence if exists t_seq
go
create sequence t_seq start with 1 increment by 1
create table t(id int primary key default (next value for t_seq),a int, b int)
create table #t_stg(id int, a int, b int)
insert into #t_stg(a,b) values (1,2),(3,3),(4,5)
update #t_stg set id = next value for t_seq
--select * from #t_stg
insert into t(id,a,b)
select * from #t_stg
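If you would rather grab the whole block of keys up front (closer to the Oracle pattern of fetching N keys and then applying them), you can also reserve a contiguous range from the sequence in one call with sys.sp_sequence_get_range and number the staged rows yourself. A rough sketch, assuming the t_seq sequence and #t_stg staging table created above (and that the sequence resolves to dbo):

declare @first sql_variant;

-- reserve one sequence value per staged row (use COUNT(*) from the staging table in practice)
exec sys.sp_sequence_get_range
    @sequence_name = N'dbo.t_seq',
    @range_size = 3,
    @range_first_value = @first output;

-- hand the reserved values out in a chosen order
with s as (
    select id, row_number() over (order by a) as rn
    from #t_stg
)
update s set id = cast(@first as int) + rn - 1;

The ORDER BY inside ROW_NUMBER is arbitrary here; pick whatever ordering should decide which row gets which key.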
I have text stored in the table "StructureStrings"
Create Table StructureStrings(Id INT Primary Key,String nvarchar(4000))
Sample Data:
Id String
1 Select * from Employee where Id BETWEEN ### and ### and Customer Id> ###
2 Select * from Customer where Id BETWEEN ### and ###
3 Select * from Department where Id=###
and I want to replace the "###" tokens with values fetched from another table
named "StructureValues"
Create Table StructureValues (Id INT Primary Key,Value nvarchar(255))
Id Value
1 33
2 20
3 44
I want to replace the "###" tokens in the strings, producing output like
Select * from Employee where Id BETWEEN 33 and 20 and Customer Id> 44
Select * from Customer where Id BETWEEN 33 and 20
Select * from Department where Id=33
PS: The assumption here is that the values will replace the tokens in the same order, i.e. the first occurrence of "###" will be replaced by the first value of the
"StructureValues.Value" column, and so on.
Posting this as a new answer, rather than editing my previous one.
This uses Jeff Moden's DelimitedSplit8K; it does not use the built-in splitter available in SQL Server 2016 onwards, as that does not provide an item number (thus no join criteria).
You'll first need to put the function on your server; then you'll be able to use this. DO NOT expect it to perform well. There's a lot of REPLACE in this, which will hinder performance.
SELECT (SELECT REPLACE(DS.Item, '###', CONVERT(nvarchar(100), SV.[Value]))
FROM StructureStrings sq
CROSS APPLY DelimitedSplit8K (REPLACE(sq.String,'###','###|'), '|') DS --NOTE this uses a varchar, not an nvarchar, you may need to change this if you really have Unicode characters
JOIN StructureValues SV ON DS.ItemNumber = SV.Id
WHERE SS.Id = sq.id
FOR XML PATH ('')) AS NewString
FROM StructureStrings SS;
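As a side note (my addition, not part of the original answer): on SQL Server 2022 and Azure SQL Database, STRING_SPLIT accepts a third enable_ordinal argument and returns an ordinal column, so the same idea should work without the custom splitter. A rough sketch against the same tables:

SELECT (SELECT REPLACE(s.value, '###', CONVERT(nvarchar(100), SV.[Value]))
FROM StructureStrings sq
CROSS APPLY STRING_SPLIT(REPLACE(sq.String, '###', '###|'), '|', 1) s --third argument 1 = enable_ordinal
JOIN StructureValues SV ON s.ordinal = SV.Id
WHERE SS.Id = sq.Id
FOR XML PATH ('')) AS NewString
FROM StructureStrings SS;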
If you have any question, please place the comments on this answer; do not put them under the question which has already become quite a long discussion.
Maybe this is what you are looking for.
DECLARE @Employee TABLE (Id int)
DECLARE @StructureValues TABLE (Id int, Value int)
INSERT INTO @Employee
VALUES (1), (2), (3), (10), (15), (20), (21)
INSERT INTO @StructureValues
VALUES (1, 10), (2, 20)
SELECT *
FROM @Employee
WHERE Id BETWEEN (SELECT MIN(Value) FROM @StructureValues) AND (SELECT MAX(Value) FROM @StructureValues)
Very different take here:
CREATE TABLE StructureStrings(Id int PRIMARY KEY,String nvarchar(4000));
INSERT INTO StructureStrings
VALUES (1,'SELECT * FROM Employee WHERE Id BETWEEN ### AND ###'),
(2,'SELECT * FROM Customer WHERE Id BETWEEN ### AND ###');
CREATE TABLE StructureValues (Id int, [Value] int);
INSERT INTO StructureValues
VALUES (1,10),
(2,20);
GO
DECLARE @SQL nvarchar(4000);
--I'm assuming that as you gave one output you are supplying an ID or something?
DECLARE @Id int = 1;
WITH CTE AS(
SELECT SS.Id,
SS.String,
SV.[Value],
LEAD([Value]) OVER (ORDER BY SV.Id) AS NextValue,
STUFF(SS.String,PATINDEX('%###%',SS.String),3,CONVERT(varchar(10),[Value])) AS ReplacedString
FROM StructureStrings SS
JOIN StructureValues SV ON SS.Id = SV.Id)
SELECT @SQL = STUFF(ReplacedString,PATINDEX('%###%',ReplacedString),3,CONVERT(varchar(10),NextValue))
FROM CTE
WHERE Id = @Id;
PRINT @SQL;
--EXEC (@SQL); --yes, I should really be using sp_executesql
GO
DROP TABLE StructureValues;
DROP TABLE StructureStrings;
Edit: Note that Id 2 will return NULL, as there isn't a value to LEAD to. If this needs to change, we'll need more logic on what the value should be when there is no value to LEAD to.
Edit 2: This was based on the OP's original post, not what it was later edited to. As the question currently stands, it's impossible.
I have a material table as the source table, as follows:
CREATE TABLE dbo.MATERIAL (
ID int IDENTITY(1,1) NOT NULL,
CATEGORY_ID int NOT NULL,
SECTION_ID int NOT NULL,
STATUS_ID int NOT NULL
)
I also have an Order table as the target table, as follows:
CREATE TABLE dbo.[ORDER] (
ID int IDENTITY(1,1) NOT NULL,
MATERIAL_ID int NOT NULL
)
I am sending some data to SQL Server via a stored procedure and have the following temp table created
DECLARE @temptable TABLE (
CATEGORY_ID int NOT NULL,
SECTION_ID int NOT NULL,
STATUS_ID int NOT NULL,
COUNT int NOT NULL
)
and it is filled with data as follows:
CATEGORY_ID SECTION_ID STATUS_ID COUNT
----------- ---------- --------- -----
3 8 1 10
8 2 2 11
4 6 1 8
What I want is to take COUNT number of materials from the MATERIAL table that match the given CATEGORY_ID, SECTION_ID and STATUS_ID triple of the same row, then insert those records' IDs into the target table, ORDER.
How can I accomplish this task?
Regards.
I think I have found the solution. I should have used a CURSOR. The following code does the trick:
DECLARE @MATERIAL_ID int
DECLARE @CATEGORY_ID int
DECLARE @SECTION_ID int
DECLARE @STATUS_ID int
DECLARE @COUNT int
DECLARE cur CURSOR LOCAL FOR
SELECT
CATEGORY_ID,
SECTION_ID,
STATUS_ID,
COUNT
FROM
@temptable
OPEN cur
FETCH NEXT FROM cur INTO @CATEGORY_ID, @SECTION_ID, @STATUS_ID, @COUNT
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO
dbo.[ORDER]
(MATERIAL_ID)
SELECT
TOP (@COUNT) ID
FROM
dbo.MATERIAL AS m
WHERE
m.CATEGORY_ID = @CATEGORY_ID
AND m.SECTION_ID = @SECTION_ID
AND m.STATUS_ID = @STATUS_ID
FETCH NEXT FROM cur INTO @CATEGORY_ID, @SECTION_ID, @STATUS_ID, @COUNT
END
CLOSE cur
DEALLOCATE cur
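One caveat with this (my note): TOP (@COUNT) without an ORDER BY takes an arbitrary set of the matching materials. If it matters which materials get picked (say, the lowest IDs first), add an ORDER BY to the inner SELECT, for example:

INSERT INTO dbo.[ORDER] (MATERIAL_ID)
SELECT TOP (@COUNT) ID
FROM dbo.MATERIAL AS m
WHERE m.CATEGORY_ID = @CATEGORY_ID
AND m.SECTION_ID = @SECTION_ID
AND m.STATUS_ID = @STATUS_ID
ORDER BY m.ID   -- or whatever ordering defines which materials should be taken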
Threw away a bunch of previous work now that I think I understand the requirement.
Here's a working SQL FIDDLE:
This generates a data set called GenRows containing one row for each number from 1 up to the maximum count in temptable. It does this with a recursive common table expression (CTE).
It then joins this data set to temptable and material to generate each material row the number of times it needs to be inserted into order.
I'm not a big fan of using reserved words, so I adjusted order to morder, and I would recommend renaming the column count as well; otherwise you'll be stuck wrapping words in []'s from time to time.
NOTE: This assumes there will not be duplicates (records with the same CATEGORY_ID, SECTION_ID and STATUS_ID) in the material table. If there are, then this may or may not behave as expected.
And lastly, now that I have a better understanding of what you're after, I'm not positive you'll see much of a performance gain compared to using cursors, as the rows have to be generated somehow. This may still be a bit faster because we generate the set and insert it all at once as opposed to row by row, but there's overhead in producing, storing and retrieving that data set which may offset the gain. Only testing would tell.
WITH
GenRows (RowNumber, Val) AS (
-- Anchor member definition
SELECT 1 AS RowNumber, (Select max([count]) val from temptable) val
UNION ALL
-- Recursive member definition
SELECT a.RowNumber + 1 AS RowNumber, a.val
FROM GenRows a
WHERE a.RowNumber < a.val
)
Insert into morder (Material_ID)
SELECT A.ID
FROM material A
INNER JOIN temptable B
on A.Category_ID = B.Category_ID
and A.Section_Id = B.Section_Id
and A.Status_Id = B.Status_ID
INNER JOIN GenRows
on GenRows.RowNumber <= b.[count]
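One caveat with the recursive CTE (my note, not from the answer above): SQL Server caps recursion at 100 levels by default, so if the largest count in temptable can exceed 100 the statement will fail unless you append OPTION (MAXRECURSION 0) (or a suitable limit) to it. A small standalone illustration of the hint:

-- without the hint this fails once recursion passes 100 levels
WITH GenRows (RowNumber) AS (
    SELECT 1
    UNION ALL
    SELECT RowNumber + 1 FROM GenRows WHERE RowNumber < 500
)
SELECT MAX(RowNumber) AS Generated FROM GenRows
OPTION (MAXRECURSION 0);   -- 0 removes the cap; a specific number sets a new ceiling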
We have found that SQL Server uses an index scan instead of an index seek if the where clause contains parametrized values instead of string literals.
The following is an example:
SQL Server performs an index scan in the following case (parameters in the where clause):
declare @val1 nvarchar(40), @val2 nvarchar(40);
set @val1 = 'val1';
set @val2 = 'val2';
select
min(id)
from
scor_inv_binaries
where
col1 in (@val1, @val2)
group by
col1
On the other hand, the following query performs an index seek:
select
min(id)
from
scor_inv_binaries
where
col1 in ('val1', 'val2')
group by
col1
Has anyone observed similar behaviour, and how did you fix it to ensure that the query performs an index seek instead of an index scan?
We are not able to use the FORCESEEK table hint, because FORCESEEK is not supported on SQL Server 2005.
I have updated the statistics as well.
Thank you very much for your help.
Well, to answer your question of why SQL Server is doing this: the query is not compiled in a logical order, each statement is compiled on its own merit,
so when the query plan for your select statement is being generated, the optimiser does not know that @val1 and @val2 will become 'val1' and 'val2' respectively.
When SQL Server does not know the value, it has to make a best guess about how many times that variable will appear in the table, which can sometimes lead to sub-optimal plans. My main point is that the same query with different values can generate different plans. Imagine this simple example:
IF OBJECT_ID(N'tempdb..#T', 'U') IS NOT NULL
DROP TABLE #T;
CREATE TABLE #T (ID INT IDENTITY PRIMARY KEY, Val INT NOT NULL, Filler CHAR(1000) NULL);
INSERT #T (Val)
SELECT TOP 991 1
FROM sys.all_objects a
UNION ALL
SELECT TOP 9 ROW_NUMBER() OVER(ORDER BY a.object_id) + 1
FROM sys.all_objects a;
CREATE NONCLUSTERED INDEX IX_T__Val ON #T (Val);
All I have done here is create a simple table and add 1,000 rows with values 1-10 for the column Val; however, 1 appears 991 times and the other 9 values only appear once each. The premise is that this query:
SELECT COUNT(Filler)
FROM #T
WHERE Val = 1;
would be more efficient as a scan of the entire table than as an index seek followed by 991 bookmark lookups to get the value for Filler. However, with only 1 matching row, the following query:
SELECT COUNT(Filler)
FROM #T
WHERE Val = 2;
will be more efficient as an index seek plus a single bookmark lookup to get the value for Filler (and running these two queries will bear this out).
I am pretty certain the cut-off between a seek with bookmark lookups and a scan varies depending on the situation, but it is fairly low. Using the example table, with a bit of trial and error, I found that I needed the Val column to have 38 rows with the value 2 before the optimiser went for a full table scan over an index seek and bookmark lookup:
IF OBJECT_ID(N'tempdb..#T', 'U') IS NOT NULL
DROP TABLE #T;
DECLARE @I INT = 38;
CREATE TABLE #T (ID INT IDENTITY PRIMARY KEY, Val INT NOT NULL, Filler CHAR(1000) NULL);
INSERT #T (Val)
SELECT TOP (991 - @i) 1
FROM sys.all_objects a
UNION ALL
SELECT TOP (@i) 2
FROM sys.all_objects a
UNION ALL
SELECT TOP 8 ROW_NUMBER() OVER(ORDER BY a.object_id) + 2
FROM sys.all_objects a;
CREATE NONCLUSTERED INDEX IX_T__Val ON #T (Val);
SELECT COUNT(Filler), COUNT(*)
FROM #T
WHERE Val = 2;
So for this example the limit is 3.7% of matching rows.
Since the query does not know how many rows will match when you are using a variable, it has to guess; the simplest way is to take the total number of rows and divide it by the total number of distinct values in the column, so in this example the estimated number of rows for WHERE Val = @Val is 1000 / 10 = 100. The actual algorithm is more complex than this, but for example's sake this will do. So when we look at the execution plan for:
DECLARE @i INT = 2;
SELECT COUNT(Filler)
FROM #T
WHERE Val = @i;
We can see here (with the original data) that the estimated number of rows is 100, but the actual number of rows is 1. From the previous steps we know that with more than 38 rows the optimiser will opt for a clustered index scan over an index seek, so since the best guess for the number of rows is higher than this, the plan for an unknown variable is a clustered index scan.
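If you want to see the guess the optimiser is working from, you can look at the density vector in the statistics on the index (a side note from me, not part of the original answer); with the original 1-10 data the "All density" for Val is 0.1, and 0.1 x 1000 rows gives the estimate of 100. Something like this should show it, assuming the #T and IX_T__Val from the examples above:

-- density = 1 / number of distinct values; estimate for an unknown variable = density * row count
DBCC SHOW_STATISTICS ('tempdb..#T', IX_T__Val);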
Just to further prove the theory, if we create the table with 1000 rows of numbers 1-27 evenly distributed (so the estimated row count will be approximately 1000 / 27 = 37.037)
IF OBJECT_ID(N'tempdb..#T', 'U') IS NOT NULL
DROP TABLE #T;
CREATE TABLE #T (ID INT IDENTITY PRIMARY KEY, Val INT NOT NULL, Filler CHAR(1000) NULL);
INSERT #T (Val)
SELECT TOP 27 ROW_NUMBER() OVER(ORDER BY a.object_id)
FROM sys.all_objects a;
INSERT #T (val)
SELECT TOP 973 t1.Val
FROM #T AS t1
CROSS JOIN #T AS t2
CROSS JOIN #T AS t3
ORDER BY t2.Val, t3.Val;
CREATE NONCLUSTERED INDEX IX_T__Val ON #T (Val);
Then run the query again, we get a plan with an index seek:
DECLARE @i INT = 2;
SELECT COUNT(Filler)
FROM #T
WHERE Val = @i;
So hopefully that pretty comprehensively covers why you get that plan. Now I suppose the next question is how you force a different plan, and the answer is to use the query hint OPTION (RECOMPILE) to force the query to compile at execution time, when the value of the parameter is known. Reverting to the original data, where the best plan for Val = 2 is a seek with a lookup, but using a variable yields a plan with an index scan, we can run:
DECLARE @i INT = 2;
SELECT COUNT(Filler)
FROM #T
WHERE Val = @i;
GO
DECLARE @i INT = 2;
SELECT COUNT(Filler)
FROM #T
WHERE Val = @i
OPTION (RECOMPILE);
We can see that the latter uses the index seek and key lookup because it has checked the value of the variable at execution time, and the most appropriate plan for that specific value is chosen. The trouble with OPTION (RECOMPILE) is that it means you can't take advantage of cached query plans, so there is an additional cost of compiling the query each time.
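Another option worth knowing about (my addition, not part of the answer above) is the OPTIMIZE FOR query hint: the plan is still compiled once and cached, but the optimiser is told which value to base its estimates on. There is also OPTIMIZE FOR UNKNOWN (SQL Server 2008+), which explicitly asks for the density-based guess described above. A sketch against the same table:

DECLARE @i INT = 2;
SELECT COUNT(Filler)
FROM #T
WHERE Val = @i
OPTION (OPTIMIZE FOR (@i = 2));   -- build and cache the plan as though @i were 2

This works well when the skew is stable and you know a representative value; it does not help if the best plan genuinely differs from call to call.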
I had this exact problem and none of the query option solutions seemed to have any effect.
It turned out I was declaring an nvarchar(8) parameter while the table had a column of varchar(8).
Upon changing the parameter type, the query did an index seek and ran instantaneously. The optimizer must have been getting tripped up by the implicit conversion.
This may not be the answer in this case, but it's something worth checking.
Try
declare @val1 nvarchar(40), @val2 nvarchar(40);
set @val1 = 'val1';
set @val2 = 'val2';
select
min(id)
from
scor_inv_binaries
where
col1 in (@val1, @val2)
group by
col1
OPTION (RECOMPILE)
What datatype is col1?
Your variables are nvarchar whereas your literals are varchar/char; if col1 is varchar/char it may be doing the index scan to implicitly cast each value in col1 to nvarchar for the comparison.
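If that is the case, the usual fix is to declare the variables with the column's own type so the indexed column does not have to be converted (a sketch, assuming col1 is varchar(40); adjust the length to match your column):

declare @val1 varchar(40), @val2 varchar(40);   -- varchar to match col1, so no implicit conversion on the indexed column
set @val1 = 'val1';
set @val2 = 'val2';
select
min(id)
from
scor_inv_binaries
where
col1 in (@val1, @val2)
group by
col1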
I guess the first query is using a Predicate and the second query is using a Seek Predicate.
A Seek Predicate is the operation that describes the b-tree portion of the seek. A Predicate is the operation that describes the additional filter using non-key columns. Based on the description, it is very clear that a Seek Predicate is better than a Predicate, as it searches the index, whereas with a Predicate the search is on non-key columns – which implies that the search is on the data in the pages themselves.
For more details, please visit:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/36a176c8-005e-4a7d-afc2-68071f33987a/predicate-and-seek-predicate
I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple other tables that define what the question text is for the QuestionId and demographics for the RespondentId...
Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working... The problem is that it seems slow and we don't have that much data (less than 50k respondents).
Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out what respondents are being exported and then select their combined cached de-aggregated data.
To me the quick and dirty fix is a totally flat table, RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale AND I don't want to have to maintain the flattened table as the survey changes.
So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO. With that in place, I would then be able to get my export with filtering and XML calls into the XML column.
What are you doing to export aggregated data in a flattened format (CSV, Excel, etc)? Does this approach seem ok? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)...
Is there a better technology/approach for this?
Thanks!
EDIT: SQL Block for testing.
Run steps 1 & 2 to prime the data, test with step 3, clean up with step 4...
At a thousand respondents by one hundred questions, it already seems slower than I'd like.
SET NOCOUNT ON;
-- step 1 - create seed data
CREATE TABLE #Questions (QuestionId INT PRIMARY KEY IDENTITY (1,1), QuestionText VARCHAR(50));
CREATE TABLE #Respondents (RespondentId INT PRIMARY KEY IDENTITY (1,1), Name VARCHAR(50));
CREATE TABLE #Data (QuestionId INT NOT NULL, RespondentId INT NOT NULL, Answer INT);
DECLARE @QuestionTarget INT = 100
,@QuestionCount INT = 0
,@RespondentTarget INT = 1000
,@RespondentCount INT = 0
,@RespondentId INT;
WHILE @QuestionCount < @QuestionTarget BEGIN
INSERT INTO #Questions(QuestionText) VALUES(CAST(NEWID() AS CHAR(36)));
SET @QuestionCount = @QuestionCount + 1;
END;
WHILE @RespondentCount < @RespondentTarget BEGIN
INSERT INTO #Respondents(Name) VALUES(CAST(NEWID() AS CHAR(36)));
SET @RespondentId = SCOPE_IDENTITY();
SET @QuestionCount = 1;
WHILE @QuestionCount <= @QuestionTarget BEGIN
INSERT INTO #Data(QuestionId, RespondentId, Answer)
VALUES(@QuestionCount, @RespondentId, ROUND(((10 - 1 -1) * RAND() + 1), 0));
SET @QuestionCount = @QuestionCount + 1;
END;
SET @RespondentCount = @RespondentCount + 1;
END;
-- step 2 - index seed data
ALTER TABLE #Data ADD CONSTRAINT [PK_Data] PRIMARY KEY CLUSTERED (QuestionId ASC, RespondentId ASC);
CREATE INDEX DataRespondentQuestion ON #Data (RespondentId ASC, QuestionId ASC);
-- step 3 - query data
DECLARE @Columns NVARCHAR(MAX)
,@TemplateSQL NVARCHAR(MAX)
,@RunSQL NVARCHAR(MAX);
SELECT @Columns = STUFF(
(
SELECT DISTINCT '],[' + q.QuestionText
FROM #Questions AS q
ORDER BY '],[' + q.QuestionText
FOR XML PATH('')
), 1, 2, '') + ']';
SET @TemplateSql =
'SELECT *
FROM
(
SELECT r.Name, q.QuestionText, d.Answer
FROM #Respondents AS r
INNER JOIN #Data AS d ON d.RespondentId = r.RespondentId
INNER JOIN #Questions AS q ON q.QuestionId = d.QuestionId
) AS d
PIVOT
(
MAX(d.Answer)
FOR d.QuestionText
IN (xxCOLUMNSxx)
) AS p;';
SET @RunSql = REPLACE(@TemplateSql, 'xxCOLUMNSxx', @Columns)
EXECUTE sys.sp_executesql @RunSql;
-- step 4 - clean up
DROP INDEX DataRespondentQuestion ON #Data;
DROP TABLE #Data;
DROP TABLE #Questions;
DROP TABLE #Respondents;
No, your approach does not seem OK. Keep your normalized data. If you have proper keys, the "cost" to de-aggregate will be minimal. To further optimize your performance, stop using dynamic SQL. Write well-crafted queries and encapsulate them in stored procedures. This will allow SQL Server to cache the query plans instead of rebuilding them every time.
Before you do any of this, however, check the query plan. It is also possible that you are missing an index on at least one of the fields you are searching on, which will result in a full table scan of data. You may be able to drastically increase your performance with a few well-placed indexes.
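For instance (a hypothetical sketch only; the Respondents table and CompletedDate column stand in for whatever fields your dynamic criteria actually filter on), an index on the filter column lets the engine seek straight to the respondents being exported instead of scanning:

-- hypothetical: support "respondents that completed on x date (or range)"
CREATE NONCLUSTERED INDEX IX_Respondents_CompletedDate
ON dbo.Respondents (CompletedDate)
INCLUDE (RespondentId);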