I want this procedure to change the table name when I execute it.
The table name that I want to change is Recargas_@mes.
Is there some way to do that?
@MES DATETIME
AS
BEGIN
SELECT CUENTA, SUM(COSTO_REC) COSTO_REC
INTO E09040_DEV.BI_PRO_COSTO_RECARGAS
FROM (
SELECT a.*,(CASE
WHEN COD_AJUSTE IN ('ELEC_TEXT','TFREPPVV_C') THEN (A.VALOR)*(R.COSTO) ELSE 0 END)
FROM Recargas_@MES AS A, BI_PRO_LISTA_COSTOS_RECARGAS AS R
WHERE R.ANO_MES = @MES
) D
GROUP BY CUENTA
END
Sample code:
-- Declare variables
DECLARE @MES DATETIME;
DECLARE @TSQL NVARCHAR(MAX);
-- Set the date that drives the table-name suffix (example value, see note 2 below)
SET @MES = '20160315';
-- Set the variable to a valid statement
SET @TSQL = N'
SELECT CUENTA, SUM(COSTO_REC) AS COSTO_REC
INTO E09040_DEV.BI_PRO_COSTO_RECARGAS
FROM (
SELECT A.*,
(CASE
WHEN COD_AJUSTE IN (''ELEC_TEXT'',''TFREPPVV_C'') THEN
(A.VALOR)*(R.COSTO)
ELSE 0
END)
FROM
Recargas_' + REPLACE(CONVERT(CHAR(10), @MES, 101), '/', '') + ' AS A,
BI_PRO_LISTA_COSTOS_RECARGAS AS R
WHERE R.ANO_MES = ''' + CONVERT(CHAR(10), @MES, 101) + '''
) D
GROUP BY CUENTA'
-- Execute the statement
EXECUTE (@TSQL)
Some things to note:
1 - I assume the table name has some type of suffix that is a date. I used MM/DD/YYYY with the slashes removed as the format for the suffix.
2 - The WHERE clause will only work if you are not using the time part of the variable.
For instance, 03/15/2016 00:00:00 would be a date without a time entry. If not, you will have to use >= and < to grab all hours for a particular day.
3 - You are creating a table on the fly with this code. On the second execution, you will get an error unless you drop the table first (see the sketch after these notes).
4 - You are not using an ON clause when joining table A to table R. To be ANSI compliant, move the WHERE condition into an ON clause.
5 - The actual calculation created by the CASE statement is not given a column name.
Issues 3 to 5 have to be solved on your end since I do not have the detailed business requirements.
Have Fun.
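If it helps, here is a hedged sketch of the same idea (reusing the @MES and @TSQL variables declared in the sample above) that also covers note 3 and keeps @MES out of the concatenated text by passing it to sp_executesql as a real parameter. The drop guard and QUOTENAME are my own additions; the table and column names are taken from the question.
-- Sketch only: drop the target table if it already exists (note 3), then rebuild it
IF OBJECT_ID('E09040_DEV.BI_PRO_COSTO_RECARGAS', 'U') IS NOT NULL
    DROP TABLE E09040_DEV.BI_PRO_COSTO_RECARGAS;

DECLARE @Suffix CHAR(8) = REPLACE(CONVERT(CHAR(10), @MES, 101), '/', '');  -- e.g. 03152016

SET @TSQL = N'
SELECT CUENTA, SUM(COSTO_REC) AS COSTO_REC
INTO E09040_DEV.BI_PRO_COSTO_RECARGAS
FROM (
    SELECT A.CUENTA,
           CASE WHEN COD_AJUSTE IN (''ELEC_TEXT'', ''TFREPPVV_C'')
                THEN A.VALOR * R.COSTO ELSE 0 END AS COSTO_REC
    FROM ' + QUOTENAME('Recargas_' + @Suffix) + N' AS A
    JOIN BI_PRO_LISTA_COSTOS_RECARGAS AS R
        ON R.ANO_MES = @MES          -- @MES travels as a parameter, not as concatenated text
) AS D
GROUP BY CUENTA';

EXECUTE sp_executesql @TSQL, N'@MES DATETIME', @MES = @MES;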
It should work using dynamic SQL, which allows a dynamic table name:
DECLARE @SQL NVARCHAR(MAX) = N'
SELECT CUENTA, SUM(COSTO_REC) COSTO_REC
INTO E09040_DEV.BI_PRO_COSTO_RECARGAS
FROM (
SELECT a.*,(CASE
WHEN COD_AJUSTE IN (''ELEC_TEXT'',''TFREPPVV_C'') THEN (A.VALOR)*(R.COSTO) ELSE 0 END)
FROM Recargas_' + @MES + ' AS A, BI_PRO_LISTA_COSTOS_RECARGAS AS R
WHERE R.ANO_MES = ' + CAST(@MES AS VARCHAR(32)) + '
) D
GROUP BY CUENTA'
EXECUTE (@SQL)
Please look at the query below:
select name as [Employee Name] from <table name>
I want to generate [Employee Name] dynamically based on another column's value.
Here is the sample table:
s_dt dt01 dt02 dt03
2015-10-26
I want the dt01 column to display with the column name 26, and the dt02 column name to be 26+1=27, and so on.
I'm not sure if I understood you correctly. If I'm going in the wrong direction, please add comments to your question to make it more precise.
If you really want to create columns via SQL, you could try a variation of this script:
DECLARE @name NVARCHAR(MAX) = 'somename'
DECLARE @sql NVARCHAR(MAX) = 'ALTER TABLE aps.tbl_Fabrikkalender ADD '+@name+' nvarchar(10) NULL'
EXEC sys.sp_executesql @sql;
To retrieve the column name from another query, insert the following between the above declares and fill in the placeholders as needed:
SELECT @name = <some column> FROM <some table> WHERE <some condition>
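Putting the two pieces together against the sample data from the question (dbo.SampleTable and s_dt are assumed names here, and I assume a single row as in your example), it could look like this:
-- Hedged sketch: derive the new column name from s_dt and add it to the table
DECLARE @name NVARCHAR(MAX);
SELECT @name = CAST(DATEPART(dd, s_dt) AS VARCHAR(2))   -- gives '26' for 2015-10-26
FROM dbo.SampleTable;

DECLARE @sql NVARCHAR(MAX) =
    'ALTER TABLE dbo.SampleTable ADD ' + QUOTENAME(@name) + ' nvarchar(10) NULL';
EXEC sys.sp_executesql @sql;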
You would need to dynamically build the SQL as a string then execute it. Something like this...
DECLARE @s_dt INT
DECLARE @query NVARCHAR(MAX)
SET @s_dt = (SELECT DATEPART(dd, s_dt) FROM TableName WHERE 1 = 1)
SET @query = 'SELECT s_dt'
+ ', NULL as dt' + RIGHT('0' + CAST(@s_dt as VARCHAR), 2)
+ ', NULL as dt' + RIGHT('0' + CAST((@s_dt + 1) as VARCHAR), 2)
+ ', NULL as dt' + RIGHT('0' + CAST((@s_dt + 2) as VARCHAR), 2)
+ ', NULL as dt' + RIGHT('0' + CAST((@s_dt + 3) as VARCHAR), 2)
+ ' FROM TableName WHERE 1 = 1'
EXECUTE(@query)
You will need to replace WHERE 1 = 1 in two places above to select your data, and change TableName to the name of your table. The query currently puts NULL in the dynamic columns; you probably want something else there.
To explain what it is doing:
SET @s_dt selects the date value from your table and returns only the day part as an INT.
SET @query dynamically builds your SELECT statement based on that day part (@s_dt).
Each line takes @s_dt, adds 0, 1, 2, 3 etc., casts it as VARCHAR, prepends '0' (so that it is at least 2 characters long) and then takes the rightmost two characters (the '0' and the RIGHT operation just ensure anything under 10 gets a leading '0').
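For example, the padding on its own:
SELECT RIGHT('0' + CAST(7 AS VARCHAR), 2) AS padded_single,   -- '07'
       RIGHT('0' + CAST(26 AS VARCHAR), 2) AS padded_double;  -- '26'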
It is possible to do this using dynamic SQL; however, I would also consider looking at the pivot operators to see if they can achieve what you are after a lot more efficiently.
https://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx
I have a stored procedure (it joins two tables and selects rows matching the @GID condition), and I want to convert the result from rows to columns. I use a dynamic pivot query.
My stored procedure:
After that, I try using pivot.
I want the result like this:
GROUP_MOD_ID ADD EDIT DELETE ETC...
---------------------------------------
G02 1 1 0 ....
Can you give me some advice about this?
Thank you.
It's because you're using the batch delimiter to separate your queries. This means the scope of @GID is incorrect. Remove the semicolon after:
DECLARE @pivot_cols NVARCHAR(MAX);
You don't need to use batch delimiters in this case. The logical flow of the procedure means you can omit them without any problems.
EDIT:
Here's the edited code that I've devised:
ALTER PROCEDURE GET_COLUMN_VALUE @GID CHAR(3)
AS
BEGIN
DECLARE @PivotCols NVARCHAR(MAX)
SELECT @PivotCols = STUFF((SELECT DISTINCT ',' + QUOTENAME(B.FUNCTION_MOD_NAME)
FROM FUNCTION_GROUP AS A
JOIN FUNCTION_MOD B
ON A.FUNCTION_MOD_ID = B.FUNCTION_MOD_ID
WHERE A.GROUP_MOD_ID = @GID
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
DECLARE @PivotQuery NVARCHAR(MAX)
SET @PivotQuery = '
;WITH CTE AS (
SELECT A.GROUP_MOD_ID, B.FUNCTION_MOD_NAME, CAST(ALLOW AS TINYINT) AS ALLOW
FROM FUNCTION_GROUP AS A
JOIN FUNCTION_MOD AS B
ON A.FUNCTION_MOD_ID = B.FUNCTION_MOD_ID)
SELECT GROUP_MOD_ID, '+@PivotCols+'
FROM CTE
PIVOT (MAX(ALLOW) FOR FUNCTION_MOD_NAME IN ('+@PivotCols+')) AS PIV'
PRINT @PivotQuery
EXEC (@PivotQuery)
END
EDIT2:
You should execute this stored procedure like so:
EXEC GET_COLUMN_VALUE @GID='G02'
I'm trying to merge a very wide table from a source (linked Oracle server) to a target table (SQL Server 2012) w/o listing all the columns. Both tables are identical except for the records in them.
This is what I have been using:
TRUNCATE TABLE TargetTable
INSERT INTO TargetTable
SELECT *
FROM SourceTable
When/if I get this working I would like to make it a procedure so that I can pass into it the source, target and match key(s) needed to make the update. For now I would just love to get it to work at all.
USE ThisDatabase
GO
DECLARE
@Columns VARCHAR(4000) = (
SELECT COLUMN_NAME + ','
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TargetTable'
FOR XML PATH('')
)
MERGE TargetTable AS T
USING (SELECT * FROM SourceTable) AS S
ON (T.ID = S.ID AND T.ROWVERSION = S.ROWVERSION)
WHEN MATCHED THEN
UPDATE SET @Columns = S.@Columns
WHEN NOT MATCHED THEN
INSERT (@Columns)
VALUES (S.@Columns)
Please excuse my noob-ness. I feel like I'm only half way there, but I don't understand some parts of SQL well enough to put it all together. Many thanks.
As previously mentioned in the answers, if you don't want to specify the columns, then you have to write a dynamic query.
Something like this in your case should help:
DECLARE
@Columns VARCHAR(4000) = (
SELECT COLUMN_NAME + ','
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TargetTable'
FOR XML PATH('')
)
DECLARE @MergeQuery NVARCHAR(MAX)
DECLARE @UpdateQuery VARCHAR(MAX)
DECLARE @InsertQuery VARCHAR(MAX)
DECLARE @InsertQueryValues VARCHAR(MAX)
DECLARE @Col VARCHAR(200)
SET @UpdateQuery='Update Set '
SET @InsertQuery='Insert ('
SET @InsertQueryValues=' Values('
WHILE LEN(@Columns) > 0
BEGIN
SET @Col=left(@Columns, charindex(',', @Columns+',')-1);
IF @Col<> 'ID' AND @Col <> 'ROWVERSION'
BEGIN
SET @UpdateQuery= @UpdateQuery+ 'TargetTable.'+ @Col + ' = SourceTable.'+ @Col+ ','
SET @InsertQuery= @InsertQuery+@Col + ','
SET @InsertQueryValues=@InsertQueryValues+'SourceTable.'+ @Col+ ','
END
SET @Columns = stuff(@Columns, 1, charindex(',', @Columns+','), '')
END
SET @UpdateQuery=LEFT(@UpdateQuery, LEN(@UpdateQuery) - 1)
SET @InsertQuery=LEFT(@InsertQuery, LEN(@InsertQuery) - 1)
SET @InsertQueryValues=LEFT(@InsertQueryValues, LEN(@InsertQueryValues) - 1)
SET @InsertQuery=@InsertQuery+ ')'+ @InsertQueryValues +')'
SET @MergeQuery=
N'MERGE TargetTable
USING SourceTable
ON TargetTable.ID = SourceTable.ID AND TargetTable.ROWVERSION = SourceTable.ROWVERSION ' +
'WHEN MATCHED THEN ' + @UpdateQuery +
' WHEN NOT MATCHED THEN '+@InsertQuery +';'
Execute sp_executesql @MergeQuery
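One small habit that helps with string-built statements like this is to print the generated SQL before (or instead of) executing it, so you can inspect it or run it by hand:
PRINT @MergeQuery;   -- inspect the generated MERGE statement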
If you want more information about MERGE, you could read this excellent article.
Don't feel bad. It takes time. MERGE has interesting syntax; I've actually never used it. I read Microsoft's documentation on it, which is very helpful and even has examples. I think I covered everything, though there may be a slight amount of tweaking you have to do, but it should work.
Here's the documentation for MERGE:
https://msdn.microsoft.com/en-us/library/bb510625.aspx
As for your code, I commented pretty much everything to explain it and show you how to do it.
This part is to help write your merge statement
USE ThisDatabase --This says what database context to use.
--Pretty much which database you're querying.
--Like this: database.schema.objectName
GO
DECLARE
@SetColumns VARCHAR(4000) = (
SELECT CONCAT(QUOTENAME(COLUMN_NAME),' = S.',QUOTENAME(COLUMN_NAME),',',CHAR(10)) --CONCAT just concatenates these values. It adds the strings together.
--QUOTENAME adds brackets around the column names
--CHAR(10) is a line break for formatting purposes (totally optional)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TargetTable'
FOR XML PATH('')
) --This uses the FOR XML trick to get your column names concatenated into one string.
--What is really in that system view is one row per column name.
--BTW, if the column names in both tables are identical, then this will work.
DECLARE @Columns VARCHAR(4000) = (
SELECT QUOTENAME(COLUMN_NAME) + ','
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TargetTable'
FOR XML PATH('')
)
SET @Columns = SUBSTRING(@Columns,0,LEN(@Columns)) -- this gets rid of the comma at the end of your list
SET @SetColumns = SUBSTRING(@SetColumns,0,LEN(@SetColumns)) --same thing here
SELECT @SetColumns --You're going to want to copy and paste this into your WHEN MATCHED statement
SELECT @Columns --You're going to want to copy this into your WHEN NOT MATCHED statement
GO
Merge Statement
Especially look at my notes on ROWVERSION.
MERGE INTO TargetTable AS T
USING SourceTable AS S --Don't really need to write SELECT * FROM since you need the whole table anyway
ON (T.ID = S.ID AND T.[ROWVERSION] = S.[ROWVERSION]) --These are your matching parameters
--One note on this: if ROWVERSION means different versions of the same data, you don't want ROWVERSION here
--Let's say you have ID 1 ROWVERSION 2 in your source but only version 1 in your TargetTable
--If you leave T.ID = S.ID AND T.ROWVERSION = S.ROWVERSION, then it will insert the new ROWVERSION
--So you'll have two versions of ID 1
WHEN MATCHED THEN --When TargetTable ID and ROWVERSION match the matching parameters
--Update the values in the TargetTable
UPDATE SET /*Copy and paste @SetColumns here*/
--Should look like this (minus the "--"):
--Col1 = S.Col1,
--Col2 = S.Col2,
--Col3 = S.Col3,
--Etc...
WHEN NOT MATCHED THEN --This says: okay, there are no rows with this ID, now insert a new row
INSERT (col1,col2,col3) --Copy and paste @Columns in between the parentheses. Should look like I show it. Note: this is the insert into the target table, so you're listing the target table columns
VALUES (col1,col2,col3) --Same thing here. This is the list of source table columns
I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the ASP ObjectDataSource, but it could be considered a more general problem.
The requirement is to return a subset of the data based on the usual paging parameters, startPageIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. Also there are some parameters passed in to filter the data on various conditions.
One common way to do this seems to be something like this:
[Method 1]
;WITH stuff AS (
SELECT
CASE
WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
WHEN @SortBy = ...
ELSE ROW_NUMBER() OVER (ORDER BY whatever)
END AS Row,
.,
.,
.,
FROM Table1
INNER JOIN Table2 ...
LEFT JOIN Table3 ...
WHERE ... (lots of things to check)
)
SELECT *
FROM stuff
WHERE (Row > @startRowIndex)
AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
ORDER BY Row
One problem with this is that it doesn't give the total count and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice.
One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total but repeats it for every row in the result set, which is not ideal.
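In other words, the tail of Method 1 would become something like this (same CTE as above, just one extra column):
SELECT *, (SELECT COUNT(*) FROM stuff) AS TotalRows
FROM stuff
WHERE (Row > @startRowIndex)
AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
ORDER BY Row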
[Method 2]
An interesting alternative is given here (https://web.archive.org/web/20211020111700/https://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) + ' bit more SQL' gubbins.
[Method 3]
The only way I can find to get what I want, without repeating code which would have to be synchronized, and keeping things reasonably readable is to go back to the "old way" of using a table variable:
DECLARE @stuff TABLE (Row INT, ...)
INSERT INTO @stuff
SELECT
CASE
WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
WHEN @SortBy = ...
ELSE ROW_NUMBER() OVER (ORDER BY whatever)
END AS Row,
.,
.,
.,
FROM Table1
INNER JOIN Table2 ...
LEFT JOIN Table3 ...
WHERE ... (lots of things to check)
SELECT *
FROM @stuff
WHERE (Row > @startRowIndex)
AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
ORDER BY Row
(Or a similar method using an IDENTITY column on the table variable).
Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter.
I did some tests and with a fairly simple version of the query (no sortBy and no filter), method 1 seems to come out on top (almost twice as quick as the other 2). Then I decided to test with something closer to the complexity I will probably need, with the SQL in stored procedures. With this I get method 1 taking nearly twice as long as the other 2 methods, which seems strange.
Is there any good reason why I shouldn't spurn CTEs and stick with method 3?
UPDATE - 15 March 2012
I tried adapting Method 1 to dump the page from the CTE into a temporary table so that I could extract the TotalRows and then select just the relevant columns for the resultset. This seemed to add significantly to the time (more than I expected). I should add that I'm running this on a laptop with SQL Server Express 2008 (all that I have available) but still the comparison should be valid.
I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and parameter list) and it's much more readable. Also this method runs fastest in my environment. Why that should be still baffles me, but I guess the answer is hinted at in Hogan's comment.
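For anyone following along, the shape of that sp_executesql call is roughly the sketch below; dbo.SomeTable and the Name column are placeholders, and the real filter parameters would be added to the parameter description string the same way. Only the sort column still has to be concatenated (or validated against a whitelist), since ORDER BY cannot take a parameter.
DECLARE @sql NVARCHAR(MAX) = N'
SELECT *
FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY Name) AS Row FROM dbo.SomeTable) s
WHERE Row > @startRowIndex
  AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
ORDER BY Row;';

EXEC sp_executesql @sql,
     N'@startRowIndex INT, @maximumRows INT',   -- parameter description string
     @startRowIndex = 0, @maximumRows = 25;     -- parameter list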
I would most likely split the @SortBy argument into two, @SortColumn and @SortDirection, and use them like this:
…
ROW_NUMBER() OVER (
ORDER BY CASE @SortColumn
WHEN 'Name' THEN Name
WHEN 'OtherName' THEN OtherName
…
END *
CASE @SortDirection
WHEN 'DESC' THEN -1
ELSE 1
END
) AS Row
…
And this is how the TotalRows column could be defined (in the main select):
…
COUNT(*) OVER () AS TotalRows
…
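Assembled on a hypothetical table (dbo.SomeTable, Amount and Quantity are placeholders), the whole statement might read as below. One caveat: the multiply-by-minus-one trick only works when the sort expressions are numeric; string columns need separate CASE branches per direction.
DECLARE @SortColumn VARCHAR(20) = 'Amount', @SortDirection VARCHAR(4) = 'DESC',
        @startRowIndex INT = 0, @maximumRows INT = 25;

SELECT *
FROM (
    SELECT t.*,
           COUNT(*) OVER () AS TotalRows,
           ROW_NUMBER() OVER (
               ORDER BY CASE @SortColumn
                            WHEN 'Amount'   THEN Amount
                            WHEN 'Quantity' THEN Quantity
                        END *
                        CASE @SortDirection WHEN 'DESC' THEN -1 ELSE 1 END
           ) AS Row
    FROM dbo.SomeTable AS t
) AS x
WHERE x.Row > @startRowIndex
  AND (x.Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
ORDER BY x.Row;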
I would definitely want to do a combination of a temp table and NTILE for this sort of approach.
The temp table will allow you to do your complicated series of conditions just once. Because you're only storing the pieces you care about, it also means that when you start doing selects against it further in the procedure, it should have a smaller overall memory usage than if you ran the condition multiple times.
I like NTILE() for this better than ROW_NUMBER() because it's doing the work you're trying to accomplish for you, rather than having additional where conditions to worry about.
The example below is based on a similar query I'm using as part of a research query; I have an ID I can use that I know will be unique in the results. Using an ID that was an identity column would also be appropriate here, though.
--DECLARES here would be stored procedure parameters
declare @pagesize int, @sortby varchar(25), @page int = 1;
--Create temp with all relevant columns; ID here could be an identity PK to help with paging query below
create table #temp (id int not null primary key clustered, status varchar(50), lastname varchar(100), startdate datetime);
--Insert into #temp based off of your complex conditions, but with no attempt at paging
insert into #temp
(id, status, lastname, startdate)
select id, status, lastname, startdate
from Table1 ...etc.
where ...complicated conditions
SET @pagesize = 50;
SET @page = 5;--OR CAST(@startRowIndex/@pagesize as int)+1
SET @sortby = 'name';
--Only use the id and count to use NTILE
;with paging(id, pagenum, totalrows) as
(
select id,
NTILE((SELECT COUNT(*) cnt FROM #temp)/@pagesize) OVER(ORDER BY CASE WHEN @sortby = 'NAME' THEN lastname ELSE convert(varchar(10), startdate, 112) END),
cnt
FROM #temp
cross apply (SELECT COUNT(*) cnt FROM #temp) total
)
--Use the id to join back to main select
SELECT *
FROM paging
JOIN #temp ON paging.id = #temp.id
WHERE paging.pagenum = @page
--Don't need the drop in the procedure, included here for rerunnability
drop table #temp;
I generally prefer temp tables over table variables in this scenario, largely so that there are definite statistics on the result set you have. (Search for temp table vs table variable and you'll find plenty of examples as to why)
Dynamic SQL would be most useful for handling the sorting method. Using my example, you could do the main query in dynamic SQL and only pull the sort method you want to pull into the OVER().
The example above also does the total in each row of the return set, which as you mentioned was not ideal. You could, instead, have a @totalrows output parameter in your procedure and pull it as well as the result set. That would save you the CROSS APPLY that I'm doing above in the paging CTE.
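That output-parameter variant only needs a couple of lines slotted into the script above (the parameter name @totalrows is just an example):
-- In the procedure header:  @totalrows INT OUTPUT
SELECT @totalrows = COUNT(*) FROM #temp;  -- right after the INSERT INTO #temp
-- ...then drop the CROSS APPLY and the totalrows column from the paging CTE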
I would create one procedure to stage, sort, and paginate (using NTILE()) a staging table; and a second procedure to retrieve by page. This way you don't have to run the entire main query for each page.
This example queries AdventureWorks.HumanResources.Employee:
--------------------------------------------------------------------------
create procedure dbo.EmployeesByMartialStatus
@MaritalStatus nchar(1)
, @sort varchar(20)
as
-- Init staging table
if exists(
select 1 from sys.objects o
inner join sys.schemas s on s.schema_id=o.schema_id
and s.name='Staging'
and o.name='EmployeesByMartialStatus'
where type='U'
)
drop table Staging.EmployeesByMartialStatus;
-- Populate staging table with sort value
with s as (
select *
, sr=ROW_NUMBER()over(order by case @sort
when 'NationalIDNumber' then NationalIDNumber
when 'ManagerID' then ManagerID
-- plus any other sort conditions
else EmployeeID end)
from AdventureWorks.HumanResources.Employee
where MaritalStatus=@MaritalStatus
)
select *
into #temp
from s;
-- And now pages
declare @RowCount int; select @RowCount=COUNT(*) from #temp;
declare @PageCount int=ceiling(@RowCount/20.0); --assuming 20 lines/page
select *
, Page=NTILE(@PageCount)over(order by sr)
into Staging.EmployeesByMartialStatus
from #temp;
go
--------------------------------------------------------------------------
-- procedure to retrieve selected pages
create procedure EmployeesByMartialStatus_GetPage
@page int
as
declare @MaxPage int;
select @MaxPage=MAX(Page) from Staging.EmployeesByMartialStatus;
set @page=case when @page not between 1 and @MaxPage then 1 else @page end;
select EmployeeID,NationalIDNumber,ContactID,LoginID,ManagerID
, Title,BirthDate,MaritalStatus,Gender,HireDate,SalariedFlag,VacationHours,SickLeaveHours
, CurrentFlag,rowguid,ModifiedDate
from Staging.EmployeesByMartialStatus
where Page=@page
GO
--------------------------------------------------------------------------
-- Usage
-- Load staging
exec dbo.EmployeesByMartialStatus 'M','NationalIDNumber';
-- Get pages 1 through n
exec dbo.EmployeesByMartialStatus_GetPage 1;
exec dbo.EmployeesByMartialStatus_GetPage 2;
-- ...etc (this would actually be a foreach loop, but that detail is omitted for brevity)
GO
I use this method of using EXEC():
-- SP parameters:
-- @query: your query as an input parameter
-- @maximumRows: the number of rows per page
-- @startPageIndex: the page number to return
-- @sortBy: a column name (or names), optionally with the DESC keyword
DECLARE @query nvarchar(max) = 'SELECT * FROM sys.Objects',
@maximumRows int = 8,
@startPageIndex int = 3,
@sortBy as nvarchar(100) = 'name Desc'
SET @query = ';WITH CTE AS (' + @query + ')' +
'SELECT *, (dt.pagingRowNo - 1) / ' + CAST(@maximumRows as nvarchar(10)) + ' + 1 As pagingPageNo' +
', pagingCountRow / ' + CAST(@maximumRows as nvarchar(10)) + ' As pagingCountPage ' +
', (dt.pagingRowNo - 1) % ' + CAST(@maximumRows as nvarchar(10)) + ' + 1 As pagingRowInPage ' +
'FROM ( SELECT *, ROW_NUMBER() OVER (ORDER BY ' + @sortBy + ') As pagingRowNo, COUNT(*) OVER () AS pagingCountRow ' +
'FROM CTE) dt ' +
'WHERE (dt.pagingRowNo - 1) / ' + CAST(@maximumRows as nvarchar(10)) + ' + 1 = ' + CAST(@startPageIndex as nvarchar(10))
EXEC(@query)
The result set contains the original query's columns followed by some extra columns (remove the ones you don't need):
pagingRowNo : The row number
pagingCountRow : The total number of rows
pagingPageNo : The current page number
pagingCountPage : The total number of pages
pagingRowInPage : The row number within the page, starting at 1
I have the following query, which takes 2 parameters:
YearNumber
MonthNumber
In my pivot query, I am trying to select columns based on the @Year_Rtl variable. I need to select data for the year passed in, the previous year, and the year before that. Since the data displayed on the UI is in a table format divided by @Year_Rtl, I decided to write a pivot query for that, as below.
In the query, it works fine if I hard code [@Year_Rtl], [@Year_Rtl - 1], [@Year_Rtl - 2] to [2012], [2011], [2010]. But since the year passed in can be anything, I want the columns to be named dynamically.
DECLARE @Month_Rtl int
DECLARE @Year_Rtl int
SET @Year_Rtl = 2012
SET @Month_Rtl = 1
SELECT
'Data 1', [@Year_Rtl], [@Year_Rtl - 1], [@Year_Rtl - 2]
FROM
(SELECT [Yr_No], Qty
FROM dbo.Table1 t
WHERE (t.Col1 = 10) AND
(t.Col2 = '673') AND
((t.Mth_No = @Month_Rtl AND t.Yr_No = @Year_Rtl) OR
(t.Mth_No = 12 AND t.Yr_No IN (@Year_Rtl - 1, @Year_Rtl - 2)))
) p PIVOT (SUM(Qty)
FOR [Yr_No] IN ([@Year_Rtl], [@Year_Rtl-1], [@Year_Rtl-2])
) AS pvt
The above query throws the following errors:
Error converting data type nvarchar to smallint.
The incorrect value "@Year_Rtl" is supplied in the PIVOT operator.
Invalid column name '@Year_Rtl - 1'.
Invalid column name '@Year_Rtl - 2'.
Since you can use dynamic SQL, I'd go with a macro-replacement approach. You identify the areas of the query that must be dynamically replaced, mark them with placeholders (e.g. $$Year_Rtl), and then calculate their replacement values below. I find that it keeps the SQL statement easy to follow.
DECLARE @SQL NVarChar(2000);
SELECT @SQL = N'
SELECT
''Data 1'', [$$Year_Rtl], [$$Year_RtlM1], [$$Year_RtlM2]
FROM
(SELECT [Yr_No], Qty
FROM dbo.Table1 t
WHERE (t.Col1 = 10) AND
(t.Col2 = ''673'') AND
((t.Mth_No = $$Month_Rtl AND t.Yr_No = $$Year_Rtl) OR
(t.Mth_No = 12 AND t.Yr_No IN ($$Year_RtlM1, $$Year_RtlM2)))
) p PIVOT (SUM(Qty)
FOR [Yr_No] IN ([$$Year_Rtl], [$$Year_RtlM1], [$$Year_RtlM2])
) AS pvt';
SELECT @SQL = REPLACE(@SQL, '$$Year_RtlM2', @Year_Rtl - 2);
SELECT @SQL = REPLACE(@SQL, '$$Year_RtlM1', @Year_Rtl - 1);
SELECT @SQL = REPLACE(@SQL, '$$Year_Rtl', @Year_Rtl);
SELECT @SQL = REPLACE(@SQL, '$$Month_Rtl', @Month_Rtl);
PRINT @SQL;
-- Uncomment the next line to allow the built query to execute...
--EXECUTE sp_ExecuteSQL @SQL;
Since consuming code will also have to be flaky under this scheme (e.g. selecting columns based on "position" rather than name), why not normalize the columns by performing a DATEDIFF(year, Yr_No, @Year_Rtl) and work from there? Those columns will always be 0, -1 and -2...
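A sketch of that normalized variant, using plain subtraction since Yr_No is already an integer year (so no dynamic SQL is needed; the pivot columns are always 0, -1 and -2):
DECLARE @Month_Rtl int = 1, @Year_Rtl int = 2012;

SELECT 'Data 1', [0] AS ThisYear, [-1] AS LastYear, [-2] AS YearBefore
FROM
    (SELECT t.Yr_No - @Year_Rtl AS Yr_Offset,   -- always 0, -1 or -2 for the rows selected below
            t.Qty
     FROM dbo.Table1 t
     WHERE (t.Col1 = 10) AND (t.Col2 = '673') AND
           ((t.Mth_No = @Month_Rtl AND t.Yr_No = @Year_Rtl) OR
            (t.Mth_No = 12 AND t.Yr_No IN (@Year_Rtl - 1, @Year_Rtl - 2)))
    ) p
PIVOT (SUM(Qty) FOR Yr_Offset IN ([0], [-1], [-2])) AS pvt;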
You need to look into Dynamic SQL Pivoting.
I recommend reading Itzik Ben-Gan's T-SQL Fundamentals where he goes over how to do this.
Alternatively try this article if you don't want to buy the book.
Maybe this will help:
First, get the columns with a small tally of years (a recursive CTE) like this:
DECLARE @Month_Rtl int,
@Year_Rtl int,
@Year_Rtl_Start INT,
@cols VARCHAR(MAX),
@values VARCHAR(MAX)
SET @Year_Rtl = 2012
SET @Month_Rtl = 1
SET @Year_Rtl_Start=2009
;WITH Years ( n ) AS (
SELECT @Year_Rtl_Start UNION ALL
SELECT 1 + n FROM Years WHERE n < @Year_Rtl )
SELECT
@cols = COALESCE(@cols + ','+QUOTENAME(n),
QUOTENAME(n)),
@values = COALESCE(@values + ','+CAST(n AS VARCHAR(100)),
CAST(n AS VARCHAR(100)))
FROM
Years
ORDER BY n DESC
The variable @cols contains the columns that go in the pivot, and the variable @values contains the years for the IN list. @Year_Rtl is the end year and @Year_Rtl_Start is the start of your range.
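With the values above (@Year_Rtl_Start = 2009, @Year_Rtl = 2012) you can check what the two variables end up holding:
PRINT @cols;    -- [2012],[2011],[2010],[2009]
PRINT @values;  -- 2012,2011,2010,2009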
Then declaring and executing the dynamic pivot like this:
DECLARE @query NVARCHAR(4000)=
N'SELECT
''Data 1'', '+@cols+'
FROM
(
SELECT
[Yr_No], Qty
FROM
dbo.Table1 t
WHERE
t.Col1 = 10
AND t.Col2 = ''673''
AND
(
(
t.Mth_No = '+CAST(@Month_Rtl AS VARCHAR(10))+'
AND t.Yr_No = '+CAST(@Year_Rtl AS VARCHAR(10))+'
)
OR
(
t.Mth_No = 12
AND t.Yr_No IN ('+@values+'))
)
) p
PIVOT
(
SUM(Qty)
FOR [Yr_No] IN ('+@cols+')
) AS pvt'
EXECUTE(@query)