SQL Server: COUNT all rows with a WHERE condition in all tables in a database

I need a query that counts all rows (with a WHERE condition) in all tables in a database, where the table names are unknown.
The reason: I have an alarm logging system that creates a new archive table every month or so and automatically names it AlarmLog(timestamp of last active alarm).
So I need a query that dynamically searches through all tables that exist in the database, counts the rows in a column matching a WHERE condition, and returns one single value.
For example, I want to get all active alarms for this month, last month, etc., or for a specific time range.
This is an example of a query I wrote to get the count of all active alarms last month:
SELECT COUNT(ConditionActive)
FROM (whole database)???
WHERE (Acked = 0 AND ConditionActive = 1)
AND (EventTime >= DATEADD(m, -1, DATEADD(mm, DATEDIFF(m, 0, GETDATE()), 0))
AND [EventTime] <= DATEADD(d, -1, DATEADD(mm, DATEDIFF(m, 0, GETDATE()), 0)))
AS ACTIVE_LAST_MONTH
So what do I need: a query, a stored procedure, or a dynamic SQL query?
All the tables have the same schema and columns.
Appreciate all help!

This should demonstrate why it is generally not considered good practice to make multiple copies of the same table and then aggregate data from the whole collection. This just isn't how relational databases are designed to work.
This is untested because I don't have your tables anywhere to work with, but it should get you there.
declare @SQL nvarchar(max) = ''

--This builds one count query per table, chained together with UNION ALL.
select @SQL = @SQL + '
select count(ConditionActive) as MyCount
from [' + t.name + ']
where Acked = 0
AND ConditionActive = 1
AND EventTime >= DATEADD(month, -1, DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0))
and [EventTime] <= DATEADD(day, -1, DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)) UNION ALL'
from sys.tables t

--Now we need the sum of all counts (LEFT strips the trailing " UNION ALL").
select @SQL = 'Select sum(MyCount) from (' + LEFT(@SQL, LEN(@SQL) - 10) + ') as x'

select @SQL

--uncomment the line below when you are confident that the dynamic sql is correct.
--exec sp_executesql @SQL
--EDIT--
I took the liberty of expanding the shortcuts in your DATEADD functions. The shortcuts are hard to remember, and you were using both mm and m, which both mean month. It is generally a better approach to just spell out the word to remove any ambiguity.
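For reference, if the database contained two hypothetical archive tables named AlarmLog202301 and AlarmLog202302, the dynamic statement built above would come out looking roughly like this:

Select sum(MyCount) from (
select count(ConditionActive) as MyCount
from [AlarmLog202301]
where Acked = 0
AND ConditionActive = 1
AND EventTime >= DATEADD(month, -1, DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0))
and [EventTime] <= DATEADD(day, -1, DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)) UNION ALL
select count(ConditionActive) as MyCount
from [AlarmLog202302]
where Acked = 0
AND ConditionActive = 1
AND EventTime >= DATEADD(month, -1, DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0))
and [EventTime] <= DATEADD(day, -1, DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0))
) as x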

Related

Saving table stats into a table, including the table name

I'm running into a wall compiling row updates and new rows in a few tables to save off in another table for trending. I know a cursor could achieve this pretty easily, and I do get a result set, but I'm struggling to figure out how to get these results into a table with the cursor (or whether I should approach it completely differently).
Background
I want to calculate and save off, daily, the number of new and edited rows from several tables of interest in a production database. These tables' rows are timestamped with the last edit.
My stats database contains a tablestats table that will house the information for each table across six columns. My goal is to run an Agent job daily to count the prior day's timestamps and the delta between today's rowcount and the prior day's rowcount, and then merge those into tablestats.
Something like this:
tablename  updyear  updmonth  updday  rowupdates  newrows
table_1    2023     2         5       2509        34
table_1    2023     2         6       3443        90
table_2    2023     2         5       834         255
table_2    2023     2         6       544         433
With that, I can trend/pivot the data as needed.
What I tried
I figured a cursor would be at least part of the best approach, since I was having trouble pairing the query's results with the name of the table I'm pulling from. I adapted a related question and its answers to get part of the way there, but I'm struggling with how to take the next step. I abbreviated the code below for legibility:
DECLARE @last_upd nvarchar(MAX) = '';
DECLARE @checkdate date = DATEADD(DAY, -1, GETDATE());
SELECT @last_upd = @last_upd + 'SELECT '''
+QUOTENAME(name)
+''',YEAR(last_upd) as updyear /* month, etc. */,COUNT(last_upd) as rowupdates FROM '
+ QUOTENAME(name)
+ ' WHERE last_upd > @checkdate /* GROUP BY year/month/day*/; '
FROM sys.tables
WHERE (name IN ('table_1','table_2','table_3'))
IF @@ROWCOUNT > 0
EXEC sp_executesql @last_upd
, N'@checkdate date'
, @checkdate;
Which returns the following:
Query 1

         updyear  updmonth  updday  rowupdates
table_1  2023     2         5
table_1  2023     2         6

Query 2

         updyear  updmonth  updday  rowupdates
table_2  2023     2         5
table_2  2023     2         6
Query 3, etc.
Since it returns as 3 separate queries, I'm unsure how to get that into a merge statement, since I can't SELECT * INTO #temptable with these.
The reason I'm interested in MERGE, even though it's a daily run, is to accommodate any potential conflicts with existing data. I haven't gotten to the point of doing a rowcount, but I assume at worst I could do a second cursor with the rowcount prior to rolling it all up into a stored procedure.
What you really want is a UNION ALL to combine the results from the various queries into a single result set. If you change your dynamic SELECT to a UNION ALL SELECT, you are most of the way there. What's left is to strip the leading UNION ALL using something like SET @last_upd = STUFF(@last_upd, 1, 10, ''), which replaces the first 10 characters with nothing.
If you include a newline immediately after the opening quote of your dynamic statement, the generated SQL will look a lot nicer when you print it out during debugging.
It is also common now to use STRING_AGG() to combine generated code snippets when generating dynamic SQL, but your approach works so I'll leave it.
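For reference, that STRING_AGG variant might look roughly like this (a sketch only, assuming SQL Server 2017 or later):

DECLARE @last_upd nvarchar(MAX);
DECLARE @checkdate date = DATEADD(DAY, -1, GETDATE());

-- Build one SELECT per table and let STRING_AGG glue them together with UNION ALL
SELECT @last_upd = STRING_AGG(CAST(
      'SELECT ' + QUOTENAME(name, '''')
    + ', YEAR(last_upd) as updyear, COUNT(last_upd) as rowupdates FROM ' + QUOTENAME(name)
    + ' WHERE last_upd > @checkdate GROUP BY YEAR(last_upd)' AS nvarchar(MAX)),
    ' UNION ALL ')
FROM sys.tables
WHERE name IN ('table_1','table_2','table_3');

EXEC sp_executesql @last_upd, N'@checkdate date', @checkdate;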
For the table name column in the result, you can use QUOTENAME(..., '''') to safely stringify the name inside single quotes instead of [].
The updated code would be something like:
DECLARE @last_upd nvarchar(MAX) = '';
DECLARE @checkdate date = DATEADD(DAY, -1, GETDATE());
SELECT @last_upd = @last_upd + '
UNION ALL
SELECT '
+ QUOTENAME(name, '''')
+ ',YEAR(last_upd) as updyear /* month, etc. */,COUNT(last_upd) as rowupdates FROM '
+ QUOTENAME(name)
+ ' WHERE last_upd > @checkdate GROUP BY YEAR(last_upd) /* year/month/day*/ '
FROM sys.tables
WHERE (name IN ('table_1','table_2','table_3'))
SET @last_upd = STUFF(@last_upd, 1, 10, '')
SELECT @last_upd
IF @@ROWCOUNT > 0
EXEC sp_executesql @last_upd
, N'@checkdate date'
, @checkdate
Generated SQL:
SELECT 'table_1',YEAR(last_upd) as updyear /* month, etc. */,COUNT(last_upd) as rowupdates FROM [table_1] WHERE last_upd > @checkdate GROUP BY YEAR(last_upd) /* year/month/day*/
UNION ALL
SELECT 'table_2',YEAR(last_upd) as updyear /* month, etc. */,COUNT(last_upd) as rowupdates FROM [table_2] WHERE last_upd > @checkdate GROUP BY YEAR(last_upd) /* year/month/day*/
UNION ALL
SELECT 'table_3',YEAR(last_upd) as updyear /* month, etc. */,COUNT(last_upd) as rowupdates FROM [table_3] WHERE last_upd > @checkdate GROUP BY YEAR(last_upd) /* year/month/day*/
Results:
(No column name)  updyear  rowupdates
[table_1]         2023     2
[table_2]         2023     1
[table_3]         2023     3
See this db<>fiddle
I'll leave it to you to finish up the details to get your complete desired result.

How do I use @@ROWCOUNT in a stored procedure, against rows in another table, to work out the percentage?

Firstly, may I state that I'm aware of the ability to, for example, create a new function, declare variables for rowcount1 and rowcount2, run a stored procedure that returns a subset of rows from a table, then determine the entire rowcount for that same table, assign it to the second variable, and finally do 1 / 2 x 100....
However, is there a cleaner way to do this which doesn't result in running things like this stored procedure numerous times? Something like
select (count(*stored procedure name*) / select count(*) from table) x 100) as Percentage...
Sorry for the crap scenario!
EDIT: Someone has asked for more details. Ultimately, and to cut a very long story short, I want to know what people would consider the quickest and most processor-efficient way to show the percentage of rows returned by the stored procedure out of ALL rows available in that table. Does that make more sense?
The code in the stored procedure is below:
SET @SQL = 'SELECT COUNT (DISTINCT c.ElementLabel), r.FirstName, r.LastName, c.LastReview,
CASE
WHEN c.LastReview < DateAdd(month, -1, GetDate()) THEN ''OUT of Date''
WHEN c.LastReview >= DateAdd(month, -1, GetDate()) THEN ''In Date''
WHEN c.LastReview is NULL THEN ''Not Yet Reviewed'' END as [Update Status]
FROM [Residents-' + @home_name + '] r
LEFT JOIN [CarePlans-' + @home_name + '] c ON r.PersonID = c.PersonID
WHERE r.Location = ''' + @home_name + '''
AND CarePlanType = 0
GROUP BY r.LastName, r.FirstName, c.LastReview
HAVING COUNT(ELEMENTLABEL) >= 14'
Thanks
Ant
I could not tell from your question whether you are attempting to get the count and the result set in one query. If it is OK to execute the SP and separately calculate a table count, then you could store the results of the stored procedure in a temp table.
CREATE TABLE #Results(ID INT, Value INT)
INSERT #Results EXEC myStoreProc @Parameter1, @Parameter2
SELECT
    -- cast to float (or use 100.0) to avoid integer division truncating the result to 0
    Result = (CAST((SELECT COUNT(*) FROM #Results) AS float) / (SELECT COUNT(*) FROM [table])) * 100
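Since the question title mentions @@ROWCOUNT: you can also capture it immediately after the INSERT ... EXEC, which avoids counting the temp table a second time. A rough sketch (the procedure name and dbo.SourceTable are placeholders):

CREATE TABLE #Results(ID INT, Value INT)
INSERT #Results EXEC myStoreProc @Parameter1, @Parameter2
-- @@ROWCOUNT must be read immediately after the INSERT ... EXEC
DECLARE @spRows int = @@ROWCOUNT
-- dbo.SourceTable stands in for the full table being compared against
SELECT Percentage = 100.0 * @spRows / NULLIF((SELECT COUNT(*) FROM dbo.SourceTable), 0)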

Conversion failed when converting date and/or time from character string - SQL Server error

I'm trying to dynamically select tables from my database based on the table name, which in turn is based on the date of creation.
For example, the tables might be called 'tableA20110305', or 'tableB20110305', indicating that the tables were created on 05 March 2011.
I'm trying to write a query that will select all tables named thus, created before a certain cutoff date (1 year ago), and concatenate them into DROP TABLE command statements in a table variable. The select statement looks like this.
DECLARE @cutoffDate datetime = CONVERT(DATETIME, DATEADD(YEAR,-1,GETDATE()), 112)
SELECT 'DROP TABLE "' + TABLE_NAME + '"' AS 'Command'
FROM INFORMATION_SCHEMA.TABLES
WHERE (TABLE_NAME LIKE 'tableA%' OR TABLE_NAME LIKE 'tableB%')
AND (CONVERT(DATETIME, SUBSTRING(TABLE_NAME, 7, 8), 112) < @cutoffDate)
ORDER BY Command DESC
However, when I execute this SQL, I'm seeing the following error:
Msg 241, Level 16, State 1, Line 14
Conversion failed when converting date and/or time from character string.
BUT... if I execute the following SQL statement, I see no error and get the date returned as expected:
SELECT CONVERT(DATETIME, SUBSTRING('tableA20110305', 7, 8), 112)
I don't understand why these queries are not returning the same result or where this error is coming from.
I'd very much appreciate any insights..
This behavior is explained very well in 70-461: Querying Microsoft SQL Server 2012. Consider this predicate:
WHERE propertytype = 'INT' AND CAST(propertyval AS INT) > 10
Suppose that the table being queried holds different property values. The propertytype column represents the type of the property (an INT, a DATE, and so on), and the propertyval column holds the value in a character string. When propertytype is 'INT', the value in propertyval is convertible to INT; otherwise, not necessarily.

Some assume that unless precedence rules dictate otherwise, predicates will be evaluated from left to right, and that short circuiting will take place when possible. In other words, if the first predicate propertytype = 'INT' evaluates to false, SQL Server won’t evaluate the second predicate CAST(propertyval AS INT) > 10 because the result is already known. Based on this assumption, the expectation is that the query should never fail trying to convert something that isn’t convertible.

The reality, though, is different. SQL Server does internally support a short-circuit concept; however, due to the all-at-once concept in the language, it is not necessarily going to evaluate the expressions in left-to-right order. It could decide, based on cost-related reasons, to start with the second expression, and then if the second expression evaluates to true, to evaluate the first expression as well. This means that if there are rows in the table where propertytype is different than 'INT', and in those rows propertyval isn’t convertible to INT, the query can fail due to a conversion error.
In your case, the engine decides to evaluate the date-conversion filter first, and it fails.
There are a couple of workarounds:
Use TRY_CAST instead (supported from SQL Server 2012).
Or first select all tables whose names are LIKE 'tableA%' OR LIKE 'tableB%' into a temp table, and then apply the filter (CONVERT(DATETIME, SUBSTRING(TABLE_NAME, 7, 8), 112) < @cutoffDate) against that temp table.
Well, as mentioned in the comments, you probably have other tables in your database that do not follow the same tableA<DateFormat> naming format, so you need to filter down to only the ones that do.
You can use ISDATE combined with a CASE expression to make sure the SUBSTRING is indeed in a date format:
DECLARE @cutoffDate datetime = CONVERT(DATETIME, DATEADD(YEAR,-1,GETDATE()), 112)
SELECT 'DROP TABLE "' + TABLE_NAME + '"' AS 'Command'
FROM INFORMATION_SCHEMA.TABLES
WHERE (TABLE_NAME LIKE 'tableA%' OR TABLE_NAME LIKE 'tableB%')
AND CASE WHEN ISDATE(SUBSTRING(TABLE_NAME, 7, 8)) = 1
         THEN CONVERT(DATETIME, SUBSTRING(TABLE_NAME, 7, 8), 112)
         ELSE getdate()
    END < @cutoffDate
ORDER BY Command DESC
DECLARE @cutoffDate Varchar(8); --<-- use varchar here, not datetime, since you compare YYYYMMDD strings
SET @cutoffDate = CONVERT(Varchar(8), DATEADD(YEAR,-1,GETDATE()), 112)
SELECT 'DROP TABLE '+ QUOTENAME(TABLE_SCHEMA) +'.' + QUOTENAME(TABLE_NAME) AS [Command]
From (
Select TABLE_SCHEMA , TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE (TABLE_NAME LIKE 'tableA%' OR TABLE_NAME LIKE 'tableB%')
AND ISDATE(SUBSTRING(TABLE_NAME, 7, 8)) = 1
) A
Where (CONVERT(DATETIME, SUBSTRING(TABLE_NAME, 7, 8)) < @cutoffDate)
ORDER BY Command DESC
Adding ISDATE(SUBSTRING(TABLE_NAME, 7, 8)) = 1 to your WHERE clause will only bring back the results that have a proper date value in the name, so converting it to date/datetime should then work.
The problem is that some tables match the name condition but do not have the prescribed format. In SQL Server 2012+, you can use try_convert():
SELECT 'DROP TABLE "' + TABLE_NAME + '"' AS 'Command'
FROM INFORMATION_SCHEMA.TABLES
WHERE (TABLE_NAME LIKE 'tableA%' OR TABLE_NAME LIKE 'tableB%') AND
(TRY_CONVERT(DATETIME, SUBSTRING(TABLE_NAME, 7, 8), 112) < @cutoffDate)
ORDER BY Command DESC;
In earlier versions, you might as well use string comparisons:
SELECT 'DROP TABLE "' + TABLE_NAME + '"' AS 'Command'
FROM INFORMATION_SCHEMA.TABLES
WHERE (TABLE_NAME LIKE 'tableA%' OR TABLE_NAME LIKE 'tableB%') AND
(SUBSTRING(TABLE_NAME, 7, 8) < CONVERT(VARCHAR(8), @cutoffDate, 112))
ORDER BY Command DESC;
This converts the cutoff date to a string in the format of YYYYMMDD, which is fine for this comparison. However, you do need to be careful about the values that do not match the specific format -- this might accidentally delete a table that you don't intend to delete.

Change a table name in SQL Server procedure

I want this procedure to change the table name it reads from when I execute it.
The table name that I want to vary is Recargas_@mes.
Is there some way to do that?
@MES DATETIME
AS
BEGIN
SELECT CUENTA, SUM(COSTO_REC) COSTO_REC
INTO E09040_DEV.BI_PRO_COSTO_RECARGAS
FROM (
SELECT a.*,(CASE
WHEN COD_AJUSTE IN ('ELEC_TEXT','TFREPPVV_C') THEN (A.VALOR)*(R.COSTO) ELSE 0 END)
FROM Recargas_@MES AS A, BI_PRO_LISTA_COSTOS_RECARGAS AS R
WHERE R.ANO_MES = @MES
) D
GROUP BY CUENTA
END
Sample code:
-- Declare variables
DECLARE @MES DATETIME;
DECLARE @TSQL NVARCHAR(MAX);
-- Set the variable to a valid statement
SET @TSQL = N'
SELECT CUENTA, SUM(COSTO_REC) AS COSTO_REC
INTO E09040_DEV.BI_PRO_COSTO_RECARGAS
FROM (
SELECT A.*,
(CASE
WHEN COD_AJUSTE IN (''ELEC_TEXT'',''TFREPPVV_C'') THEN
(A.VALOR)*(R.COSTO)
ELSE 0
END)
FROM
Recargas_' + REPLACE(CONVERT(CHAR(10), @MES, 101), '/', '') + ' AS A,
BI_PRO_LISTA_COSTOS_RECARGAS AS R
WHERE R.ANO_MES = ' + CONVERT(CHAR(10), @MES, 101) + '
) D
GROUP BY CUENTA'
-- Execute the statement
EXECUTE (@TSQL)
Some things to note:
1 - I assume the table name has some type of suffix that is a date. I used MM/DD/YYYY with the slashes removed as the format for that suffix.
2 - The WHERE clause will only work if you are not using the time part of the variable.
For instance, 03/15/2016 00:00:00 would be a date without a time entry. If not, you will have to use >= and < to grab all hours of a particular day.
3 - You are creating a table on the fly with this code. On the second execution, you will get an error unless you drop the table.
4 - You are not using the ON clause when joining table A to table R. To be ANSI compliant, move the WHERE clause to an ON clause.
5 - The actual calculation created by the CASE statement is not given a column name.
Issues 3 to 5 have to be solved on your end since I do not have the detailed business requirements.
Have Fun.
Using dynamic SQL to build the statement with a dynamic table name should work:
DECLARE @SQL NVARCHAR(MAX) = N'
SELECT CUENTA, SUM(COSTO_REC) COSTO_REC
INTO E09040_DEV.BI_PRO_COSTO_RECARGAS
FROM (
SELECT a.*,(CASE
WHEN COD_AJUSTE IN (''ELEC_TEXT'',''TFREPPVV_C'') THEN (A.VALOR)*(R.COSTO) ELSE 0 END)
FROM Recargas_' + @MES + ' AS A, BI_PRO_LISTA_COSTOS_RECARGAS AS R
WHERE R.ANO_MES = ' + CAST(@MES AS VARCHAR(32)) + '
) D
GROUP BY CUENTA'
EXECUTE (@SQL)
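A variation worth considering (a stripped-down sketch only, assuming the archive tables are suffixed with the month in yyyyMM format) is to concatenate just the table name and pass the filter value as a real parameter via sp_executesql:

DECLARE @MES datetime = '20160301';
DECLARE @SQL nvarchar(max);
-- Only the table name is concatenated; the filter value stays a parameter
SET @SQL = N'SELECT CUENTA, SUM(COSTO_REC) AS COSTO_REC
FROM Recargas_' + CONVERT(char(6), @MES, 112) + N' AS A, BI_PRO_LISTA_COSTOS_RECARGAS AS R
WHERE R.ANO_MES = @ANO_MES
GROUP BY CUENTA';
EXEC sp_executesql @SQL, N'@ANO_MES datetime', @ANO_MES = @MES;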

Paging, sorting and filtering in a stored procedure (SQL Server)

I was looking at different ways of writing a stored procedure to return a "page" of data. This was for use with the ASP.NET ObjectDataSource, but it could be considered a more general problem.
The requirement is to return a subset of the data based on the usual paging parameters, startPageIndex and maximumRows, but also a sortBy parameter to allow the data to be sorted. There are also some parameters passed in to filter the data on various conditions.
One common way to do this seems to be something like this:
[Method 1]
;WITH stuff AS (
SELECT
CASE
WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
WHEN @SortBy = ...
ELSE ROW_NUMBER() OVER (ORDER BY whatever)
END AS Row,
.,
.,
.,
FROM Table1
INNER JOIN Table2 ...
LEFT JOIN Table3 ...
WHERE ... (lots of things to check)
)
SELECT *
FROM stuff
WHERE (Row > @startRowIndex)
AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
ORDER BY Row
One problem with this is that it doesn't give the total count and generally we need another stored procedure for that. This second stored procedure has to replicate the parameter list and the complex WHERE clause. Not nice.
One solution is to append an extra column to the final select list, (SELECT COUNT(*) FROM stuff) AS TotalRows. This gives us the total but repeats it for every row in the result set, which is not ideal.
[Method 2]
An interesting alternative is given here (https://web.archive.org/web/20211020111700/https://www.4guysfromrolla.com/articles/032206-1.aspx) using dynamic SQL. He reckons that the performance is better because the CASE statement in the first solution drags things down. Fair enough, and this solution makes it easy to get the totalRows and slap it into an output parameter. But I hate coding dynamic SQL. All that 'bit of SQL ' + STR(@parm1) + ' bit more SQL' gubbins.
[Method 3]
The only way I can find to get what I want, without repeating code which would have to be synchronized, and keeping things reasonably readable is to go back to the "old way" of using a table variable:
DECLARE @stuff TABLE (Row INT, ...)
INSERT INTO @stuff
SELECT
CASE
WHEN @SortBy = 'Name' THEN ROW_NUMBER() OVER (ORDER BY Name)
WHEN @SortBy = 'Name DESC' THEN ROW_NUMBER() OVER (ORDER BY Name DESC)
WHEN @SortBy = ...
ELSE ROW_NUMBER() OVER (ORDER BY whatever)
END AS Row,
.,
.,
.,
FROM Table1
INNER JOIN Table2 ...
LEFT JOIN Table3 ...
WHERE ... (lots of things to check)
SELECT *
FROM @stuff
WHERE (Row > @startRowIndex)
AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
ORDER BY Row
(Or a similar method using an IDENTITY column on the table variable).
Here I can just add a SELECT COUNT on the table variable to get the totalRows and put it into an output parameter.
I did some tests, and with a fairly simple version of the query (no sortBy and no filter), method 1 seems to come out on top (almost twice as quick as the other two). Then I tested a version with the complexity I actually need, with the SQL in stored procedures. With this, method 1 takes nearly twice as long as the other two methods, which seems strange.
Is there any good reason why I shouldn't spurn CTEs and stick with method 3?
UPDATE - 15 March 2012
I tried adapting Method 1 to dump the page from the CTE into a temporary table so that I could extract the TotalRows and then select just the relevant columns for the result set. This seemed to add significantly to the time (more than I expected). I should add that I'm running this on a laptop with SQL Server Express 2008 (all that I have available), but the comparison should still be valid.
I looked again at the dynamic SQL method. It turns out I wasn't really doing it properly (just concatenating strings together). I set it up as in the documentation for sp_executesql (with a parameter description string and a parameter list) and it's much more readable. This method also runs fastest in my environment. Why that should be still baffles me, but I guess the answer is hinted at in Hogan's comment.
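For anyone curious, this is roughly what I mean by the sp_executesql pattern (the statement and parameter names here are placeholders, not the real query):

DECLARE @SQL nvarchar(max) = N'SELECT * FROM dbo.Table1 WHERE SomeColumn > @minValue';
-- A parameter description string plus the actual values, instead of concatenating values into the string
EXEC sp_executesql @SQL,
    N'@minValue int',
    @minValue = 10;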
I would most likely split the @SortBy argument into two, @SortColumn and @SortDirection, and use them like this:
…
ROW_NUMBER() OVER (
ORDER BY CASE @SortColumn
WHEN 'Name' THEN Name
WHEN 'OtherName' THEN OtherName
…
END *
CASE @SortDirection
WHEN 'DESC' THEN -1
ELSE 1
END
) AS Row
…
And this is how the TotalRows column could be defined (in the main select):
…
COUNT(*) OVER () AS TotalRows
…
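Put together with Method 1 from the question, the whole thing might look roughly like this (a sketch only, with hypothetical table and column names and a fixed sort for brevity):

DECLARE @startRowIndex int = 0, @maximumRows int = 20; -- the procedure's paging parameters
;WITH stuff AS (
    SELECT
        ROW_NUMBER() OVER (ORDER BY t1.Name) AS Row,
        COUNT(*) OVER () AS TotalRows, -- total row count before paging, repeated on every row
        t1.*
    FROM dbo.Table1 AS t1
    WHERE t1.SomeFlag = 1 -- stand-in for the real filter conditions
)
SELECT *
FROM stuff
WHERE (Row > @startRowIndex)
AND (Row <= @startRowIndex + @maximumRows OR @maximumRows <= 0)
ORDER BY Row;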
I would definitely want to do a combination of a temp table and NTILE for this sort of approach.
The temp table will allow you to do your complicated series of conditions just once. Because you're only storing the pieces you care about, it also means that when you start doing selects against it further in the procedure, it should have a smaller overall memory usage than if you ran the condition multiple times.
I like NTILE() for this better than ROW_NUMBER() because it's doing the work you're trying to accomplish for you, rather than having additional where conditions to worry about.
The example below is one based off a similar query I'm using as part of a research query; I have an ID I can use that I know will be unique in the results. Using an ID that was an identity column would also be appropriate here, though.
--DECLARES here would be stored procedure parameters
declare @pagesize int, @sortby varchar(25), @page int = 1;
--Create temp with all relevant columns; ID here could be an identity PK to help with paging query below
create table #temp (id int not null primary key clustered, status varchar(50), lastname varchar(100), startdate datetime);
--Insert into #temp based off of your complex conditions, but with no attempt at paging
insert into #temp
(id, status, lastname, startdate)
select id, status, lastname, startdate
from Table1 ...etc.
where ...complicated conditions
SET @pagesize = 50;
SET @page = 5;--OR CAST(@startRowIndex/@pagesize as int)+1
SET @sortby = 'name';
--Only use the id and count to use NTILE
;with paging(id, pagenum, totalrows) as
(
select id,
NTILE((SELECT COUNT(*) cnt FROM #temp)/@pagesize) OVER(ORDER BY CASE WHEN @sortby = 'NAME' THEN lastname ELSE convert(varchar(10), startdate, 112) END),
cnt
FROM #temp
cross apply (SELECT COUNT(*) cnt FROM #temp) total
)
--Use the id to join back to main select
SELECT *
FROM paging
JOIN #temp ON paging.id = #temp.id
WHERE paging.pagenum = @page
--Don't need the drop in the procedure, included here for rerunnability
drop table #temp;
I generally prefer temp tables over table variables in this scenario, largely so that there are definite statistics on the result set you have. (Search for temp table vs table variable and you'll find plenty of examples as to why)
Dynamic SQL would be most useful for handling the sorting method. Using my example, you could do the main query in dynamic SQL and only pull the sort method you want to pull into the OVER().
The example above also does the total in each row of the return set, which as you mentioned was not ideal. You could, instead, have a @totalrows output variable in your procedure and pull it as well as the result set. That would save you the CROSS APPLY that I'm doing above in the paging CTE.
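Continuing the example above, a rough sketch of that output-parameter variation (the variable name is a placeholder):

-- In the procedure signature this would be: @totalrows int OUTPUT
DECLARE @totalrows int;
SELECT @totalrows = COUNT(*) FROM #temp;
-- NTILE can then use the variable directly, so the CROSS APPLY is no longer needed
;with paging(id, pagenum) as
(
select id,
NTILE(@totalrows/@pagesize) OVER(ORDER BY CASE WHEN @sortby = 'NAME' THEN lastname ELSE convert(varchar(10), startdate, 112) END)
FROM #temp
)
SELECT *
FROM paging
JOIN #temp ON paging.id = #temp.id
WHERE paging.pagenum = @page;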
I would create one procedure to stage, sort, and paginate (using NTILE()) a staging table; and a second procedure to retrieve by page. This way you don't have to run the entire main query for each page.
This example queries AdventureWorks.HumanResources.Employee:
--------------------------------------------------------------------------
create procedure dbo.EmployeesByMartialStatus
@MaritalStatus nchar(1)
, @sort varchar(20)
as
-- Init staging table
if exists(
select 1 from sys.objects o
inner join sys.schemas s on s.schema_id=o.schema_id
and s.name='Staging'
and o.name='EmployeesByMartialStatus'
where type='U'
)
drop table Staging.EmployeesByMartialStatus;
-- Populate staging table with sort value
with s as (
select *
, sr=ROW_NUMBER()over(order by case @sort
when 'NationalIDNumber' then NationalIDNumber
when 'ManagerID' then ManagerID
-- plus any other sort conditions
else EmployeeID end)
from AdventureWorks.HumanResources.Employee
where MaritalStatus=@MaritalStatus
)
select *
into #temp
from s;
-- And now pages
declare @RowCount int; select @RowCount=COUNT(*) from #temp;
declare @PageCount int=ceiling(@RowCount/20.0); --assuming 20 lines/page; 20.0 avoids integer division
select *
, Page=NTILE(@PageCount)over(order by sr)
into Staging.EmployeesByMartialStatus
from #temp;
go
--------------------------------------------------------------------------
-- procedure to retrieve selected pages
create procedure EmployeesByMartialStatus_GetPage
@page int
as
declare @MaxPage int;
select @MaxPage=MAX(Page) from Staging.EmployeesByMartialStatus;
set @page=case when @page not between 1 and @MaxPage then 1 else @page end;
select EmployeeID,NationalIDNumber,ContactID,LoginID,ManagerID
, Title,BirthDate,MaritalStatus,Gender,HireDate,SalariedFlag,VacationHours,SickLeaveHours
, CurrentFlag,rowguid,ModifiedDate
from Staging.EmployeesByMartialStatus
where Page=@page
GO
--------------------------------------------------------------------------
-- Usage
-- Load staging
exec dbo.EmployeesByMartialStatus 'M','NationalIDNumber';
-- Get pages 1 through n
exec dbo.EmployeesByMartialStatus_GetPage 1;
exec dbo.EmployeesByMartialStatus_GetPage 2;
-- ...etc (this would actually be a foreach loop, but that detail is omitted for brevity)
GO
I use this method of using EXEC():
-- SP parameters:
-- @query: Your query as an input parameter
-- @maximumRows: The number of rows per page
-- @startPageIndex: The number of the page to return
-- @sortBy: A field name, or field names, optionally with the DESC keyword
DECLARE @query nvarchar(max) = 'SELECT * FROM sys.Objects',
@maximumRows int = 8,
@startPageIndex int = 3,
@sortBy as nvarchar(100) = 'name Desc'
SET @query = ';WITH CTE AS (' + @query + ')' +
'SELECT *, (dt.pagingRowNo - 1) / ' + CAST(@maximumRows as nvarchar(10)) + ' + 1 As pagingPageNo' +
', pagingCountRow / ' + CAST(@maximumRows as nvarchar(10)) + ' As pagingCountPage ' +
', (dt.pagingRowNo - 1) % ' + CAST(@maximumRows as nvarchar(10)) + ' + 1 As pagingRowInPage ' +
'FROM ( SELECT *, ROW_NUMBER() OVER (ORDER BY ' + @sortBy + ') As pagingRowNo, COUNT(*) OVER () AS pagingCountRow ' +
'FROM CTE) dt ' +
'WHERE (dt.pagingRowNo - 1) / ' + CAST(@maximumRows as nvarchar(10)) + ' + 1 = ' + CAST(@startPageIndex as nvarchar(10))
EXEC(@query)
The result set contains your query's own columns followed by a few extra paging columns.
Note:
I added some extra columns, which you can remove if you don't need them:
pagingRowNo : The row number
pagingCountRow : The total number of rows
pagingPageNo : The current page number
pagingCountPage : The total number of pages
pagingRowInPage : The row number within the current page, starting at 1
