Can we use the Pivot function in materialized views in Snowflake DB?

Can we use the PIVOT function in materialized views in Snowflake DB? Kindly respond.
I've seen the documentation, and it falls under "Nesting of subqueries within a materialized view":
https://docs.snowflake.net/manuals/user-guide/views-materialized.html#label-limitations-on-materialized-views

You can create a materialized view that includes a pivot, but the aggregate functions supported remain limited per the documentation to which you linked.

Yes, you can. The code below worked:
CREATE OR REPLACE TABLE monthly_sales(empid INT, amount INT, month TEXT)
AS SELECT * FROM VALUES
(1, 10000, 'JAN'),
(1, 400, 'JAN'),
(2, 4500, 'JAN'),
(2, 35000, 'JAN'),
(1, 5000, 'FEB'),
(1, 3000, 'FEB'),
(2, 200, 'FEB'),
(2, 90500, 'FEB'),
(1, 6000, 'MAR'),
(1, 5000, 'MAR'),
(2, 2500, 'MAR'),
(2, 9500, 'MAR'),
(1, 8000, 'APR'),
(1, 10000, 'APR'),
(2, 800, 'APR'),
(2, 4500, 'APR');
CREATE MATERIALIZED VIEW test1 AS
(SELECT EMPID AS EMP_ID, "'JAN'" AS JANUARY, "'FEB'" AS FEBRUARY, "'MAR'" AS MARCH,
"'APR'" AS APRIL
FROM monthly_sales
PIVOT(sum(amount) FOR MONTH IN ('JAN', 'FEB', 'MAR', 'APR'))
AS p)

Related

How could I replace a T-SQL cursor?

I would like to ask you how I could replace a cursor that I've inserted into my stored procedure.
Actually, we found that a cursor was the only way to manage my scenario, but as I've read, this is not a best practice.
This is my scenario: I have to calculate the stock recursively, row by row, and set the season according to what was calculated in the previous rows.
I can set the season when the transfer type is "purchase". The other transfers should be given the correct season by a T-SQL query.
The table where I should calculate the season has the following template and fake data, but they reflect the real situation:
Transfer Table Example
The rows with "FlagSeason" set to null are calculated as follows: in ascending order, the cursor starts from row 3, goes back over the previous rows, calculates the amount of stock for each season, and then updates the season column with the minimum season that still has stock.
Here's the code I used:
CREATE TABLE [dbo].[transfers]
(
[rowId] [int] NULL,
[area] [int] NULL,
[store] [int] NULL,
[item] [int] NULL,
[date] [date] NULL,
[type] [nvarchar](50) NULL,
[qty] [int] NULL,
[season] [nvarchar](50) NULL,
[FlagSeason] [int] NULL
) ON [PRIMARY]
INSERT INTO [dbo].[transfers]
([rowId]
,[area]
,[store]
,[item]
,[date]
,[type]
,[qty]
,[season]
,[FlagSeason])
VALUES (1,1,20,300,'2015-01-01','Purchase',3,'2015-FallWinter',1)
, (2,1,20,300,'2015-01-01','Purchase',4,'2016-SpringSummer',1)
, (3,1,20,300,'2015-01-01','Sales',-1,null,null)
, (4,1,20,300,'2015-01-01','Sales',-2,null,null)
, (5,1,20,300,'2015-01-01','Sales',-1,null,null)
, (6,1,20,300,'2015-01-01','Sales',-1,null,null)
, (7,1,20,300,'2015-01-01','Purchase',4,'2016-FallWinter',1)
, (8,1,20,300,'2015-01-01','Sales',-1,null,null)
DECLARE @RowId AS int
DECLARE db_cursor CURSOR FOR
Select RowID
from Transfers
where [FlagSeason] is null
order by RowID
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @RowId
WHILE @@FETCH_STATUS = 0
BEGIN
Update Transfers
set Season = (Select min(Season) as Season
from (
Select
Season
, SUM(QTY) as Qty
from Transfers
where RowID < @RowId
and [FlagSeason] = 1
group by Season
having Sum(QTY) > 0
) s
where s.Qty >= 0
)
, [FlagSeason] = 1
where RowId = @RowId
FETCH NEXT FROM db_cursor INTO @RowId
END
CLOSE db_cursor
DEALLOCATE db_cursor
In this case the query would extract:
3 qty for season 2015 FW
4 qty for 2016 SS
Then the UPDATE statement sets 2015-FW (the minimum of the two seasons with stock).
The cursor then moves forward to row 4 and runs the query again to extract the updated stock, taking into account the calculation at row 3. So the result should be:
QTY 2 for 2015 FW
QTY 4 for 2016 SS
and then the update would set 2015 FW.
And so on.
The final output should be something like this:
(expected output screenshot)
Actually, the only way out was to implement a cursor, and now it takes about 30-40 minutes to scan and update about 2.5 million rows. Does anybody know a solution that doesn't resort to a cursor?
Thanks in advance!
Updated to run on 2008
IF OBJECT_ID('tempdb..#transfer') IS NOT NULL
DROP TABLE #transfer;
GO
CREATE TABLE #transfer (
RowID INT IDENTITY(1, 1) PRIMARY KEY NOT NULL,
Area INT,
Store INT,
Item INT,
Date DATE,
Type VARCHAR(50),
Qty INT,
Season VARCHAR(50),
FlagSeason INT
);
INSERT INTO #transfer ( Area,
Store,
Item,
Date,
Type,
Qty,
Season,
FlagSeason
)
VALUES (1, 20, 300, '20150101', 'Purchase', 3, '2015-SpringSummer', 1),
(1, 20, 300, '20150601', 'Purchase', 4, '2016-SpringSummer', 1),
(1, 20, 300, '20150701', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20150721', 'Sales', -2, NULL, NULL),
(1, 20, 300, '20150901', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20160101', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20170101', 'Purchase', 4, '2017-SpringSummer', 1),
(1, 20, 300, '20170125', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20170201', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20170225', 'Sales', -1, NULL, NULL),
(1, 21, 301, '20150801', 'Purchase', 4, '2017-SpringSummer', 1),
(1, 21, 301, '20150901', 'Sales', -1, NULL, NULL),
(1, 21, 301, '20151221', 'Sales', -2, NULL, NULL),
(1, 21, 302, '20150801', 'Purchase', 1, '2016-SpringSummer', 1),
(1, 21, 302, '20150901', 'Purchase', 1, '2017-SpringSummer', 1),
(1, 21, 302, '20151101', 'Sales', -1, NULL, NULL),
(1, 21, 302, '20151221', 'Sales', -1, NULL, NULL),
(1, 20, 302, '20150801', 'Purchase', 1, '2016-SpringSummer', 1),
(1, 20, 302, '20150901', 'Purchase', 1, '2017-SpringSummer', 1),
(1, 20, 302, '20151101', 'Sales', -1, NULL, NULL),
(1, 20, 302, '20151221', 'Sales', -1, NULL, NULL);
WITH Purchases
AS (SELECT t1.RowID,
t1.Area,
t1.Store,
t1.Item,
t1.Date,
t1.Type,
t1.Qty,
t1.Season,
RunningInventory = ( SELECT SUM(t2.Qty)
FROM #transfer AS t2
WHERE t1.Type = t2.Type
AND t1.Area = t2.Area
AND t1.Store = t2.Store
AND t1.Item = t2.Item
AND t2.Date <= t1.Date
)
FROM #transfer AS t1
WHERE t1.Type = 'Purchase'
),
Sales
AS (SELECT t1.RowID,
t1.Area,
t1.Store,
t1.Item,
t1.Date,
t1.Type,
t1.Qty,
t1.Season,
RunningSales = ( SELECT SUM(ABS(t2.Qty))
FROM #transfer AS t2
WHERE t1.Type = t2.Type
AND t1.Area = t2.Area
AND t1.Store = t2.Store
AND t1.Item = t2.Item
AND t2.Date <= t1.Date
)
FROM #transfer AS t1
WHERE t1.Type = 'Sales'
)
SELECT Sales.RowID,
Sales.Area,
Sales.Store,
Sales.Item,
Sales.Date,
Sales.Type,
Sales.Qty,
Season = ( SELECT TOP 1
Purchases.Season
FROM Purchases
WHERE Purchases.Area = Sales.Area
AND Purchases.Store = Sales.Store
AND Purchases.Item = Sales.Item
AND Purchases.RunningInventory >= Sales.RunningSales
ORDER BY Purchases.Date, Purchases.Season
)
FROM Sales
UNION ALL
SELECT Purchases.RowID ,
Purchases.Area ,
Purchases.Store ,
Purchases.Item ,
Purchases.Date ,
Purchases.Type ,
Purchases.Qty ,
Purchases.Season
FROM Purchases
ORDER BY Area, Store, Item, Date
**Original answer below**
I don't understand the purpose of the FlagSeason column, so I didn't include it. Essentially, this calculates a running sum for purchases and sales, and then, for each sales transaction, finds the season whose purchase-to-date inventory is at least the sales-to-date outflow.
IF OBJECT_ID('tempdb..#transfer') IS NOT NULL
DROP TABLE #transfer;
GO
CREATE TABLE #transfer (
RowID INT IDENTITY(1, 1) PRIMARY KEY NOT NULL,
Area INT,
Store INT,
Item INT,
Date DATE,
Type VARCHAR(50),
Qty INT,
Season VARCHAR(50),
FlagSeason INT
);
INSERT INTO #transfer ( Area,
Store,
Item,
Date,
Type,
Qty,
Season,
FlagSeason
)
VALUES (1, 20, 300, '20150101', 'Purchase', 3, '2015-FallWinter', 1),
(1, 20, 300, '20150601', 'Purchase', 4, '2016-SpringSummer', 1),
(1, 20, 300, '20150701', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20150721', 'Sales', -2, NULL, NULL),
(1, 20, 300, '20150901', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20160101', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20170101', 'Purchase', 4, '2016-FallWinter', 1),
(1, 20, 300, '20170201', 'Sales', -1, NULL, NULL);
WITH Inventory
AS (SELECT *,
PurchaseToDate = SUM(CASE WHEN Type = 'Purchase' THEN Qty ELSE 0 END) OVER (ORDER BY Date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW),
SalesToDate = ABS(SUM(CASE WHEN Type = 'Sales' THEN Qty ELSE 0 END) OVER (ORDER BY Date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW))
FROM #transfer
)
SELECT Inventory.RowID,
Inventory.Area,
Inventory.Store,
Inventory.Item,
Inventory.Date,
Inventory.Type,
Inventory.Qty,
Season = CASE
WHEN Inventory.Season IS NULL
THEN ( SELECT TOP 1
PurchaseToSales.Season
FROM Inventory AS PurchaseToSales
WHERE PurchaseToSales.PurchaseToDate >= Inventory.SalesToDate
ORDER BY Inventory.Date
)
ELSE
Inventory.Season
END,
Inventory.PurchaseToDate,
Inventory.SalesToDate
FROM Inventory;
**Updated**
You'll need an index on your data to help with the sorting in order to make this perform.
Possibly:
CREATE NONCLUSTERED INDEX IX_Transfer ON #transfer(Store, Item, Date) INCLUDE(Area,Qty,Season,Type)
You should see an index scan on the named index. It will not be a seek, because the sample query does not filter any data and all of the data is included.
In addition, you need to remove Season from the Partition By clause of the SalesToDate. Resetting the sales for each season will throw your comparisons off because the rolling sales need to be compared to the rolling inventory in order for you to determine the source of sales inventory.
Two other tips for the partition clause:
Don't duplicate fields between PARTITION BY and ORDER BY. The order of the partition fields doesn't matter, since the aggregate is reset for each partition. At best the ordered partition field will be ignored; at worst it may cause the optimizer to aggregate the fields in a particular order. This has no effect on the results, but can add unnecessary overhead.
Make sure your index matches the definition of the partition by/order by clause.
The index should be [partitioning fields, sequence doesn't matter] + [ordering fields, sequence needs to match order by clause].
In your scenario, the indexed columns should be store, item, and then date. If date came before store or item, the index would not be used, because the optimizer would need to partition by store and item before it could sort by date.
If you may have multiple areas in your data, the index and partition clauses would need to be
index: area, store, item, date
partition by: area, store, item order by date
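To see why the partition columns don't belong in the ORDER BY, it can help to model the window spec outside SQL. This is a minimal pure-Python sketch (illustrative only, not part of the answer's script) of `SUM(Qty) OVER (PARTITION BY store, item ORDER BY date)`: the partition key only selects which accumulator to update, so ordering by it again adds nothing.

```python
from collections import defaultdict

def running_inventory(rows):
    """rows: (store, item, date, qty) tuples, sorted by date within each
    (store, item) partition. Returns each row extended with its running sum."""
    totals = defaultdict(int)  # one independent accumulator per partition key
    out = []
    for store, item, date, qty in rows:
        totals[(store, item)] += qty
        out.append((store, item, date, qty, totals[(store, item)]))
    return out

rows = [
    (20, 300, "2015-01-01", 3),
    (20, 300, "2015-06-01", 4),
    (20, 300, "2015-07-01", -1),
    (21, 301, "2015-08-01", 4),  # different partition: its total starts fresh
]
print(running_inventory(rows))
```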
Referring to Wes's answer, the proposed solution is almost right. It works well, but I've noticed that the assignment of the season doesn't work properly because, in my scenario, the stock should be calculated and updated per store and item. I've updated the script with some adjustments. Moreover, I've added some new fake data to better illustrate my scenario and how it should work.
IF OBJECT_ID('tempdb..#transfer') IS NOT NULL
DROP TABLE #transfer;
GO
CREATE TABLE #transfer (
RowID INT IDENTITY(1, 1) PRIMARY KEY NOT NULL,
Area INT,
Store INT,
Item INT,
Date DATE,
Type VARCHAR(50),
Qty INT,
Season VARCHAR(50),
FlagSeason INT
);
INSERT INTO #transfer ( Area,
Store,
Item,
Date,
Type,
Qty,
Season,
FlagSeason
)
VALUES (1, 20, 300, '20150101', 'Purchase', 3, '2015-SpringSummer', 1),
(1, 20, 300, '20150601', 'Purchase', 4, '2016-SpringSummer', 1),
(1, 20, 300, '20150701', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20150721', 'Sales', -2, NULL, NULL),
(1, 20, 300, '20150901', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20160101', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20170101', 'Purchase', 4, '2017-SpringSummer', 1),
(1, 20, 300, '20170125', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20170201', 'Sales', -1, NULL, NULL),
(1, 20, 300, '20170225', 'Sales', -1, NULL, NULL),
(1, 21, 301, '20150801', 'Purchase', 4, '2017-SpringSummer', 1),
(1, 21, 301, '20150901', 'Sales', -1, NULL, NULL),
(1, 21, 301, '20151221', 'Sales', -2, NULL, NULL),
(1, 21, 302, '20150801', 'Purchase', 1, '2016-SpringSummer', 1),
(1, 21, 302, '20150901', 'Purchase', 1, '2017-SpringSummer', 1),
(1, 21, 302, '20151101', 'Sales', -1, NULL, NULL),
(1, 21, 302, '20151221', 'Sales', -1, NULL, NULL),
(1, 20, 302, '20150801', 'Purchase', 1, '2016-SpringSummer', 1),
(1, 20, 302, '20150901', 'Purchase', 1, '2017-SpringSummer', 1),
(1, 20, 302, '20151101', 'Sales', -1, NULL, NULL),
(1, 20, 302, '20151221', 'Sales', -1, NULL, NULL)
;
WITH Inventory
AS (SELECT *,
PurchaseToDate = SUM(CASE WHEN Type = 'Purchase' THEN Qty ELSE 0 END) OVER (partition by store, item ORDER BY store, item,Date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW),
SalesToDate = ABS(SUM(CASE WHEN Type = 'Sales' THEN Qty ELSE 0 END) OVER (partition by store, item,season ORDER BY store, item, Date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW))
FROM #transfer
)
SELECT Inventory.RowID,
Inventory.Area,
Inventory.Store,
Inventory.Item,
Inventory.Date,
Inventory.Type,
Inventory.Qty,
Season = CASE
WHEN Inventory.Season IS NULL
THEN ( SELECT TOP 1
PurchaseToSales.Season
FROM Inventory AS PurchaseToSales
WHERE PurchaseToSales.PurchaseToDate >= Inventory.SalesToDate
and PurchaseToSales.Item = inventory.item --//Added
and PurchaseToSales.store = inventory.store --//Added
and PurchaseToSales.Area = Inventory.area --//Added
ORDER BY Inventory.Date
)
ELSE
Inventory.Season
END,
Inventory.PurchaseToDate,
Inventory.SalesToDate
FROM Inventory
Here's the output:
(output screenshot)
After these adjustments it works fine, but if I swap the fake data for the real data, which sits in a 6-million-row table, the query becomes very slow (~400 rows extracted per minute) because of these added checks in the WHERE clause of the subquery:
WHERE PurchaseToSales.PurchaseToDate >= Inventory.SalesToDate
and PurchaseToSales.Item = inventory.item --//Added
and PurchaseToSales.store = inventory.store --//Added
and PurchaseToSales.Area = Inventory.area --//Added
I've tried to replace the subquery with CROSS APPLY, but nothing changed. Am I missing something?
Thanks in advance.

Convert colums to rows in SQL Server 2008

I have this table of data. Each row represents 6 months of data (Quantity).
EX: Year = 2016 and Term = 1 means Mon1 to Mon6 is Jan to June.
If Term = 2, it means July to December.
The data above has to be shown as below, in 3 columns.
So each product must have 12 months of data from the current month.
That is, if the current month is June, then 2016 June to 2017 May (including the current month).
Can you please suggest a script to retrieve this data from the above table?
Try this. The part that gets the month number is tricky; I have used the column names, so you need to change that part if your column names are not mon1, mon2, and so on.
DECLARE @MyTable TABLE (ProductID INT, [Year] INT, Team INT, Mon1 INT, Mon2 INT, Mon3 INT, Mon4 INT, Mon5 INT, Mon6 INT)
INSERT INTO @MyTable
VALUES (1, 2016, 1, 10,20,30,40,50,60)
,(1, 2016, 2, 10,20,30,40,50,60)
,(1, 2017, 1, 10,20,30,40,50,60)
,(1, 2017, 2, 10,20,30,40,50,60)
,(2, 2016, 1, 10,20,30,40,50,60)
,(2, 2016, 2, 10,20,30,40,50,60)
,(2, 2017, 1, 10,20,30,40,50,60)
,(2, 2017, 2, 10,20,30,40,50,60)
,(3, 2016, 1, 10,20,30,40,50,60)
,(3, 2016, 2, 10,20,30,40,50,60)
,(3, 2017, 1, 10,20,30,40,50,60)
,(3, 2017, 2, 10,20,30,40,50,60)
,(4, 2016, 1, 10,20,30,40,50,60)
,(4, 2016, 2, 10,20,30,40,50,60)
,(4, 2017, 1, 10,20,30,40,50,60)
,(4, 2017, 2, 10,20,30,40,50,60)
DECLARE @FromYear INT, @FromYearMonth INT, @ToYear INT, @ToYearMonth INT
-- Get the from/to YearMonth based on the current date.
SELECT @FromYear = YEAR(GETDATE()),
@ToYear = YEAR(DATEADD(month, 11, GETDATE())) -- add 11 months, as the window starts from the current month
SELECT @FromYearMonth = @FromYear * 100 + MONTH(GETDATE()),
@ToYearMonth = @ToYear * 100 + MONTH(DATEADD(month, 11, GETDATE()))
SELECT @FromYear, @ToYear, @FromYearMonth, @ToYearMonth -- check the dates
;WITH CTE AS
(
SELECT
ProductID,
[Year],
Team,
Quantity,
CASE WHEN Team = 2 THEN RIGHT(Months,1) + 6 ELSE RIGHT(Months,1) END Mon, -- get month number
([Year] * 100) + CASE WHEN Team = 2 THEN RIGHT(Months,1) + 6 ELSE RIGHT(Months,1) END YearMonth -- get YearMonth value
FROM ( SELECT *
FROM @MyTable
WHERE [Year] BETWEEN @FromYear AND @ToYear -- filter data for year range
) x
UNPIVOT
( Quantity FOR Months IN (Mon1,Mon2,Mon3,Mon4,Mon5,Mon6) -- deflat row-cols
) p
)
SELECT ProductID, YearMonth, Quantity, Team, Mon
FROM CTE
WHERE YearMonth BETWEEN @FromYearMonth AND @ToYearMonth -- filter final data
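The month-number arithmetic in the CASE expression above can be checked in isolation. A small sketch of the same logic (hypothetical helper names; note that `RIGHT(Months, 1)` only works while the column names end in a single digit, i.e. Mon1-Mon6):

```python
def month_number(col_name: str, term: int) -> int:
    """Mirror of: CASE WHEN Team = 2 THEN RIGHT(Months,1) + 6 ELSE RIGHT(Months,1) END"""
    n = int(col_name[-1])  # RIGHT(Months, 1)
    return n + 6 if term == 2 else n

def year_month(year: int, col_name: str, term: int) -> int:
    """Mirror of: ([Year] * 100) + month number."""
    return year * 100 + month_number(col_name, term)

print(month_number("Mon3", 1))      # 3  -> March
print(month_number("Mon3", 2))      # 9  -> September
print(year_month(2016, "Mon6", 2))  # 201612 -> December 2016
```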

Select query using variable not running in mssql

A SELECT query is not working when using a variable in SQL Server 2014.
My schema is:
CREATE TABLE product
(idproduct int, name varchar(50), description varchar(50), tax decimal(18,0))
INSERT INTO product
(idproduct, name, description,tax)
VALUES
(1, 'abc', 'This is abc',10),
(2, 'xyz', 'This is xyz',20),
(3, 'pqr', 'This is pqr',15)
CREATE TABLE product_storage
(idstorage int,idproduct int,added datetime, quantity int, price decimal(18,0))
INSERT INTO product_storage
(idstorage,idproduct, added, quantity,price)
VALUES
(1, 1, 2010-01-01,0,10.0),
(2, 1, 2010-01-02,0,11.0),
(3, 1, 2010-01-03,10,12.0),
(4, 2, 2010-01-04,0,12.0),
(5, 2, 2010-01-05,10,11.0),
(6, 2, 2010-01-06,10,13.0),
(7, 3, 2010-01-07,10,14.0),
(8, 3, 2010-01-07,10,16.0),
(9, 3, 2010-01-09,10,13.0)
and I am executing the command below:
declare @price1 varchar(10)
SELECT p.idproduct, p.name, p.tax,
@price1 = (SELECT top 1 s.price
FROM product_storage s
WHERE s.idproduct=p.idproduct AND s.quantity > 0
ORDER BY s.added ASC),
(@price1 * (1 + tax/100)) AS [price_with_tax]
FROM product p
;
This is not working in MS SQL. Please help me out.
For details, check http://sqlfiddle.com/#!6/91ec2/296
My query works in MySQL; for details, check http://sqlfiddle.com/#!9/a71b8/1
Try this query
SELECT
p.idproduct
, p.name
, p.tax
, (t1.price * (1 + tax/100)) AS [price_with_tax]
FROM product p
inner join
(
SELECT ROW_NUMBER() over (PARTITION by s.idproduct order by s.added ASC) as linha, s.idproduct, s.price
FROM product_storage s
WHERE s.quantity > 0
) as t1
on t1.idproduct = p.idproduct and t1.linha = 1
Try it like this.
Explanation: you cannot use a variable "on the fly", but you can do a row-by-row calculation in an APPLY:
SELECT p.idproduct, p.name, p.tax,
Price.price1,
(price1 * (1 + tax/100)) AS [price_with_tax]
FROM product p
CROSS APPLY (SELECT top 1 s.price
FROM product_storage s
WHERE s.idproduct=p.idproduct AND s.quantity > 0
ORDER BY s.added ASC) AS Price(price1)
;
EDIT: Your Fiddle uses a bad literal date format, try this:
INSERT INTO product_storage
(idstorage,idproduct, added, quantity,price)
VALUES
(1, 1, '20100101',0,10.0),
(2, 1, '20100102',0,11.0),
(3, 1, '20100103',10,12.0),
(4, 2, '20100104',0,12.0),
(5, 2, '20100105',10,11.0),
(6, 2, '20100106',10,13.0),
(7, 3, '20100107',10,14.0),
(8, 3, '20100108',10,16.0),
(9, 3, '20100109',10,13.0)
Here is the correct schema for SQL Server; the query runs perfectly, as Shnugo replied:
VALUES
(1, 1, convert(datetime,'2010-01-01'),0,10.0),
(2, 1, convert(datetime,'2010-01-02'),0,11.0),
(3, 1, convert(datetime,'2010-01-03'),10,12.0),
(4, 2, convert(datetime,'2010-01-04'),0,12.0),
(5, 2, convert(datetime,'2010-01-05'),10,11.0),
(6, 2, convert(datetime,'2010-01-06'),10,13.0),
(7, 3, convert(datetime,'2010-01-07'),10,14.0),
(8, 3, convert(datetime,'2010-01-07'),10,16.0),
(9, 3, convert(datetime,'2010-01-09'),10,13.0)

SQL Server - resetting a running total

I am trying to derive a paging logic.
I have fields like:
RecordNo Lines
1 20
2 130
3 50
4 60
5 350
6 100
Say my pagesize is 170 lines.
The result I want to get is:
RecordNo Lines CumSum PageNo
1 20 20 1
2 130 150 1
3 50 50 2 (cumulative sum 200 exceeds 170, so it resets to 0)
4 60 110 2
5 350 350 3 (cumulative sum 460 exceeds 170, so it resets to 0)
6 100 100 4 (cumulative sum 450 exceeds 170, so it resets to 0)
I can do it using a cursor, but is there a way to achieve it with SQL only?
Here is the ddl and sample data as posted by the OP:
CREATE TABLE PAGING (RECORDNO INT, LINES INT );
INSERT INTO PAGING VALUES(1,20);
INSERT INTO PAGING VALUES(2,130);
INSERT INTO PAGING VALUES(3,50);
INSERT INTO PAGING VALUES(4,60);
INSERT INTO PAGING VALUES(5,350);
INSERT INTO PAGING VALUES(6,100);
Update:
Zohar, thanks for looking into this. The query worked perfectly with the data I gave, but when I extended it with more data it no longer gives the correct result, as the page base does not move when the sum exceeds 170.
Here is the data I tried the SQL with:
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (1, 20);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (2, 130);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (3, 50);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (4, 60);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (5, 350);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (6, 100);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (7, 20);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (8, 10);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (9, 20);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (10, 30);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (11, 5);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (12, 5);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (13, 5);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (14, 10);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (15, 205);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (16, 156);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (17, 5);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (18, 2);
INSERT [dbo].[PAGING] ([RECORDNO], [LINES]) VALUES (19, 7);
Update:
Here is the complete solution and its fiddle link:
WITH cte AS
(
SELECT RECORDNO,
LINES,
SUM(LINES) OVER (ORDER BY RECORDNO) CumSum,
SUM(LINES) OVER (ORDER BY RECORDNO) / 170 AS PageNumberBase
FROM PAGING
)
SELECT RECORDNO,
LINES,
SUM(LINES) OVER (PARTITION BY PageNumberBase ORDER BY RECORDNO) As CumSum,
DENSE_RANK() OVER(ORDER BY PageNumberBase) As PageNumber
FROM cte
ORDER BY RECORDNO
First version:
Using the DENSE_RANK window function and a CTE I was able to produce something very close; just the CumSum is not perfect.
Unfortunately I don't have the time to play with it more, so I'll leave it here with a link to SQL Fiddle and hope the OP or someone else will be able to complete the solution:
WITH cte AS
(
SELECT RECORDNO,
LINES,
SUM(LINES) OVER (ORDER BY RECORDNO) CumSum,
SUM(LINES) OVER (ORDER BY RECORDNO) / 170 AS PageNumber
FROM PAGING
)
SELECT RECORDNO,
LINES,
PageNumber,
CumSum - (170 * PageNumber) As CumSum,
DENSE_RANK() OVER(ORDER BY PageNumber)
FROM cte
Here is the link to the fiddle
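The window-function solutions above approximate a fundamentally sequential rule. As a cross-check, here is a pure-Python reference implementation of the reset logic (a sketch, not SQL), which reproduces the expected output from the question:

```python
def paginate(records, page_size=170):
    """(record_no, lines) pairs -> (record_no, lines, cum_sum, page_no).
    Start a new page whenever adding a record would push the running
    total past page_size; an oversized record gets a page to itself."""
    page, cum, out = 1, 0, []
    for record_no, lines in records:
        if cum + lines > page_size and cum > 0:
            page += 1
            cum = 0
        cum += lines
        out.append((record_no, lines, cum, page))
    return out

data = [(1, 20), (2, 130), (3, 50), (4, 60), (5, 350), (6, 100)]
print(paginate(data))
# [(1, 20, 20, 1), (2, 130, 150, 1), (3, 50, 50, 2),
#  (4, 60, 110, 2), (5, 350, 350, 3), (6, 100, 100, 4)]
```

Because each page break depends on the previous running total, any pure windowing translation (like the division-based page base above) is an approximation, which is why it drifts on the extended data set.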

SQL Server stored procedure return table with count of uniques per user

I have found a little here and a little there, but nothing that really covers my question, so here goes. I have ordered a book from Amazon, but it won't be here for another week and I really need this ASAP.
I have two tables which contain basically the following.
Table A has the users id number, login name, wins, losses, ties
Table B has the User id number, when the game ended, game state
What I want is to create a stored procedure that will return the top 10 for wins for the last week.
Loginname | total wins, last 7 days | all wins | all losses | all ties
Name1 | 10 | 40 | 8 | 6
Name2 | 9 | 96 | 76 | 19
etc....
What I have so far is:
SELECT A.login,
A.draws_count,
A.losses_count,
A.wins_count
FROM [TableB] AS B
INNER JOIN
[TableA] AS A
ON B.won_by_id = A.id
WHERE B.win_defined_time > (GETDATE() - 7)
AND B.state = 'OVER';
From there I have no clue how to return the table that I need. Any assistance would be greatly appreciated. (Also keep in mind that the 'total wins, last 7 days' field does not exist in either table.)
Assuming a schema and sample data such as the following:
CREATE TABLE [dbo].[Competitors]
(
[id] INT NOT NULL,
[login_name] VARCHAR (50) NOT NULL,
[wins] INT NOT NULL,
[losses] INT NOT NULL,
[ties] INT NOT NULL
) ON [PRIMARY]
CREATE TABLE [dbo].[Events]
(
[id] INT NOT NULL,
[Competitorid] VARCHAR (50) NOT NULL,
[EventDateTime] DATETIME NOT NULL,
[winner] BIT NOT NULL,
[EventStatus] VARCHAR (50) NOT NULL
) ON [PRIMARY]
INSERT INTO Competitors (id, login_name, wins, losses, ties)
VALUES (1, 'Player 1', 40, 8, 6),
(2, 'Player 2', 96, 76, 19),
(3, 'Player 3', 1, 0, 0)
INSERT INTO Events (id, Competitorid, EventDateTime, winner, EventStatus)
VALUES (1, 1, '2013-01-25 01:05:25.000', 1, 'OVER'),
(2, 1, '2013-01-26 01:05:25.000', 1, 'OVER'),
(3, 1, '2013-01-27 14:05:25.000', 1, 'OVER'),
(4, 1, '2013-01-28 01:05:25.000', 1, 'OVER'),
(5, 1, '2013-01-29 15:05:25.000', 1, 'OVER'),
(6, 1, '2013-01-30 01:05:25.000', 1, 'OVER'),
(7, 1, '2013-01-31 22:05:25.000', 1, 'OVER'),
(8, 1, '2013-02-01 01:05:25.000', 1, 'OVER'),
(9, 1, '2013-02-02 21:05:25.000', 1, 'OVER'),
(10, 1, '2013-01-02 11:05:25.000', 0, 'INPROGRESS'),
(11, 1, '2013-01-30 01:05:25.000', 1, 'OVER'),
(12, 2, '2013-01-25 11:05:25.000', 1, 'OVER'),
(13, 2, '2013-01-26 01:05:25.000', 1, 'OVER'),
(14, 2, '2013-01-27 11:25:25.000', 1, 'OVER'),
(15, 2, '2013-01-28 01:05:25.000', 1, 'OVER'),
(16, 2, '2013-01-29 11:45:25.000', 1, 'OVER'),
(17, 2, '2013-01-30 01:45:25.000', 1, 'OVER'),
(18, 2, '2013-01-31 12:15:25.000', 1, 'OVER'),
(19, 2, '2013-02-01 01:05:25.000', 1, 'OVER'),
(20, 2, '2013-02-02 22:25:25.000', 1, 'OVER'),
(21, 2, '2013-02-02 15:05:25.000', 0, 'INPROGRESS'),
(22, 2, '2013-01-25 01:05:25.000', 1, 'OVER'),
(23, 1, '2013-01-30 01:05:25.000', 0, 'OVER'),
(24, 2, '2013-01-30 01:05:25.000', 0, 'OVER'),
(25, 3, '2012-01-30 01:05:25.000', 1, 'OVER')
You can return the names and wins data for the ten people with the most wins in the last 7 days using the following query:
SELECT TOP 10 login_name,
recent_wins,
wins AS 'All Wins',
losses AS 'All losses',
ties AS 'All Ties'
FROM Competitors
INNER JOIN
(SELECT COUNT(*) AS recent_wins,
Competitorid
FROM events
WHERE winner = 1
AND eventdatetime BETWEEN GetDate() - 7 AND GetDate()
AND EventStatus = 'OVER'
GROUP BY Competitorid) AS recent_event_winners
ON Competitors.ID = recent_event_winners.Competitorid
ORDER BY recent_wins DESC;
This query works by joining the Competitors table to a subquery over the Events table that calculates the number of recent wins, and then taking the top 10 results. For users with a win in the last seven days, the subquery returns the number of wins for events that finished during that period.
Note: users without any wins during the time period will not be returned by this query, so the results may contain fewer than 10 rows.
A SQL Fiddle with the above sql creation scripts and query can be found at http://sqlfiddle.com/#!3/0ebc8/2
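If zero-win users should still appear, the usual fix is to turn the INNER JOIN into a LEFT JOIN and COALESCE the missing count to 0. A runnable sketch using SQLite as a stand-in (table and column names mirror the example above, the date filter is omitted, and this is illustrative rather than the answerer's query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Competitors (id INT, login_name TEXT, wins INT, losses INT, ties INT);
CREATE TABLE Events (id INT, Competitorid INT, EventDateTime TEXT, winner INT, EventStatus TEXT);
INSERT INTO Competitors VALUES (1, 'Player 1', 40, 8, 6), (3, 'Player 3', 1, 0, 0);
INSERT INTO Events VALUES (1, 1, '2013-02-01', 1, 'OVER');
""")
rows = conn.execute("""
SELECT c.login_name, COALESCE(w.recent_wins, 0) AS recent_wins, c.wins
FROM Competitors c
LEFT JOIN (SELECT Competitorid, COUNT(*) AS recent_wins
           FROM Events
           WHERE winner = 1 AND EventStatus = 'OVER'
           GROUP BY Competitorid) w
       ON w.Competitorid = c.id
ORDER BY recent_wins DESC
""").fetchall()
print(rows)  # Player 3 appears with 0 recent wins instead of vanishing
```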
