SQL Server : filling in sparse values - sql-server

I need help with SQL Server: what would be the easiest way to fill in the missing Begin and End inventory values? The values shown are verified numbers for that week.
+------+--------+-------+----------+-----+
| Week | ItemNr | Begin | Increase | End |
+------+--------+-------+----------+-----+
|    1 |   1001 |   100 |      -10 |  90 |
|    2 |   1001 |       |        0 |     |
|    3 |   1001 |    90 |        0 |  90 |
|    4 |   1001 |       |       20 |     |
|    5 |   1001 |       |      100 |     |
|    6 |   1001 |       |      -20 |     |
|    7 |   1001 |       |        0 |     |
|    8 |   1001 |   200 |       10 | 210 |
|    9 |   1001 |       |        0 |     |
|   10 |   1001 |       |      -50 | -50 |
|   11 |   1001 |       |        0 |     |
+------+--------+-------+----------+-----+
If Begin is NULL then use the previous week's End.
End = Begin + Increase
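To spell the rules out against the sample data (my arithmetic, not part of the original post):
Week 2: Begin = Week 1 End = 90;  End = 90 + 0   = 90
Week 4: Begin = Week 3 End = 90;  End = 90 + 20  = 110
Week 5: Begin = Week 4 End = 110; End = 110 + 100 = 210
Note that Week 8's verified Begin of 200 does not equal Week 7's computed End of 190; the note in the answer below touches on exactly that kind of discrepancy.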

A couple of window functions get you the result. When you specify an ORDER BY in the OVER clause, the default frame is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which behaves the same as the ROWS version here because Week is unique within each ItemNr. Since the other window function uses a non-default frame (UNBOUNDED PRECEDING AND 1 PRECEDING), I have written both frames out explicitly so you can see the difference.
WITH VTE AS(
    SELECT *
    FROM (VALUES ( 1, 1001,  100, -10),
                 ( 2, 1001, NULL,   0),
                 ( 3, 1001,   90,   0),
                 ( 4, 1001, NULL,  20),
                 ( 5, 1001, NULL, 100),
                 ( 6, 1001, NULL, -20),
                 ( 7, 1001, NULL,   0),
                 ( 8, 1001,  200,  10),
                 ( 9, 1001, NULL,   0),
                 (10, 1001, NULL, -50),
                 (11, 1001, NULL,   0)) V(Week, ItemNr, [Begin], Increase))
SELECT Week,
       ItemNr,
       ISNULL([Begin], S.Starting + SUM(Increase) OVER (PARTITION BY ItemNr ORDER BY Week ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)) AS [Begin],
       Increase,
       S.Starting + SUM(Increase) OVER (PARTITION BY ItemNr ORDER BY Week ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS [End]
FROM VTE V
CROSS APPLY (SELECT TOP 1 [Begin] AS Starting
             FROM VTE ca
             WHERE ca.ItemNr = V.ItemNr
             ORDER BY Week ASC) S;
Note: This appears to be some kind of stock system. It's worth noting that the query doesn't account for the stock level going wrong. For example, if an item is stolen, the computed [End] and the filled-in [Begin] values would be wrong from that point on. If that needs to be taken into consideration, it needs to be stated in the question.
Edit: Here is a solution that caters for "lost" stock. It takes the last "known" Begin value for the stock and aggregates from there. In the extended sample below, item 1002 "sells" 10 items in week 1, yet week 2 shows a Begin value of 35 rather than 40, meaning 5 items are missing (stolen?). That discrepancy therefore needs to affect all stock levels going forward. Thus you get:
WITH VTE AS(
    SELECT *
    FROM (VALUES ( 1, 1001,  100, -10),
                 ( 2, 1001, NULL,   0),
                 ( 3, 1001,   90,   0),
                 ( 4, 1001, NULL,  20),
                 ( 5, 1001, NULL, 100),
                 ( 6, 1001, NULL, -20),
                 ( 7, 1001, NULL,   0),
                 ( 8, 1001,  200,  10),
                 ( 9, 1001, NULL,   0),
                 (10, 1001, NULL, -50),
                 (11, 1001, NULL,   0),
                 ( 1, 1002,   50, -10),
                 ( 2, 1002,   35,   0), --Begin value lowered. Some items went "missing"
                 ( 3, 1002, NULL,   5),
                 ( 4, 1002,   40,  10)) V(Week, ItemNr, [Begin], Increase))
SELECT Week,
       ItemNr,
       [Begin],
       Increase,
       LastKnown,
       WeekKnown,
       ISNULL([Begin], S.LastKnown + SUM(Increase) OVER (PARTITION BY ItemNr, WeekKnown ORDER BY Week ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)) AS ActualBegin,
       S.LastKnown + SUM(Increase) OVER (PARTITION BY ItemNr, WeekKnown ORDER BY Week ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS [End]
FROM VTE V
CROSS APPLY (SELECT TOP 1 [Begin] AS LastKnown, Week AS WeekKnown
             FROM VTE ca
             WHERE ca.ItemNr = V.ItemNr
               AND ca.Week <= V.Week
               AND ca.[Begin] IS NOT NULL
             ORDER BY Week DESC) S
ORDER BY V.ItemNr, V.Week;

Here is another way too
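This approach reads from a temp table #T that isn't created in the post; a minimal setup loading it with the question's sample data (the exact table shape is my assumption) would be:
CREATE TABLE #T (Week int, ItemNr int, [Begin] int, Increase int);
INSERT INTO #T (Week, ItemNr, [Begin], Increase) VALUES
( 1, 1001,  100, -10),
( 2, 1001, NULL,   0),
( 3, 1001,   90,   0),
( 4, 1001, NULL,  20),
( 5, 1001, NULL, 100),
( 6, 1001, NULL, -20),
( 7, 1001, NULL,   0),
( 8, 1001,  200,  10),
( 9, 1001, NULL,   0),
(10, 1001, NULL, -50),
(11, 1001, NULL,   0);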
SELECT T1.Week,
       T1.ItemNr,
       CASE WHEN T1.[Begin] IS NULL THEN
                (SELECT MAX([Begin]) + SUM(Increase) FROM #T WHERE Week < T1.Week AND ItemNr = T1.ItemNr)
            ELSE
                T1.[Begin]
       END AS [Begin],
       T1.Increase,
       CASE WHEN T1.[Begin] IS NULL THEN
                (SELECT MAX([Begin]) + SUM(Increase) FROM #T WHERE Week < T1.Week AND ItemNr = T1.ItemNr)
            ELSE
                T1.[Begin]
       END + T1.Increase AS [End]
FROM #T T1;
Returns:
+------+--------+-------+----------+-----+
| Week | ItemNr | Begin | Increase | End |
+------+--------+-------+----------+-----+
|    1 |   1001 |   100 |      -10 |  90 |
|    2 |   1001 |    90 |        0 |  90 |
|    3 |   1001 |    90 |        0 |  90 |
|    4 |   1001 |    90 |       20 | 110 |
|    5 |   1001 |   110 |      100 | 210 |
|    6 |   1001 |   210 |      -20 | 190 |
|    7 |   1001 |   190 |        0 | 190 |
|    8 |   1003 |   200 |       10 | 210 |
|    9 |   1003 |   210 |        0 | 210 |
|   10 |   1003 |   210 |      -50 | 160 |
|   11 |   1003 |   160 |        0 | 160 |
+------+--------+-------+----------+-----+
Demo

Related

What's an efficient way to count "previous" rows in SQL?

Hard to phrase the title for this one.
I have a table of data which contains a row per invoice. For example:
| Invoice ID | Customer Key | Date       | Value | Something |
| ---------- | ------------ | ---------- | ----- | --------- |
| 1          | A            | 08/02/2019 | 100   | 1         |
| 2          | B            | 07/02/2019 | 14    | 0         |
| 3          | A            | 06/02/2019 | 234   | 1         |
| 4          | A            | 05/02/2019 | 74    | 1         |
| 5          | B            | 04/02/2019 | 11    | 1         |
| 6          | A            | 03/02/2019 | 12    | 0         |
I need to add another column that counts the number of previous rows per CustomerKey, but only if "Something" is equal to 1, so that it returns this:
| Invoice ID | Customer Key | Date       | Value | Something | Count |
| ---------- | ------------ | ---------- | ----- | --------- | ----- |
| 1          | A            | 08/02/2019 | 100   | 1         | 2     |
| 2          | B            | 07/02/2019 | 14    | 0         | 1     |
| 3          | A            | 06/02/2019 | 234   | 1         | 1     |
| 4          | A            | 05/02/2019 | 74    | 1         | 0     |
| 5          | B            | 04/02/2019 | 11    | 1         | 0     |
| 6          | A            | 03/02/2019 | 12    | 0         | 0     |
I know I can do this using a correlated subquery like this...
select t.*,
       (
           select count(*)
           from table t2
           where t2.[Customer Key] = t.[Customer Key]
             and t2.[Date] < t.[Date]
             and t2.Something = 1
       ) as [Count]
from table t
But I have a lot of data and that's pretty slow. I know I can also use CROSS APPLY to achieve the same thing, but as far as I can tell that isn't any better performing than the correlated subquery.
So: is there a more efficient means of achieving this, or do I just suck it up?
EDIT: I originally posted this without the requirement that only rows where Something = 1 are counted. Mea culpa - I asked it in a hurry. Unfortunately I think that this means I can't use row_number() over (partition by [Customer Key])
Assuming you're using SQL Server 2012+ you can use Window Functions:
COUNT(CASE WHEN Something = 1 THEN CustomerKey END) OVER (PARTITION BY CustomerKey ORDER BY [Date]
      ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS [Count]
Old answer before new required logic:
COUNT(CustomerKey) OVER (PARTITION BY CustomerKey ORDER BY [Date]
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) -1 AS [Count]
If you're not using 2012, an alternative (for the original requirement, without the Something = 1 filter) is to use ROW_NUMBER:
ROW_NUMBER() OVER (PARTITION BY CustomerKey ORDER BY [Date]) - 1 AS [Count]
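A self-contained sketch of the windowed approach against the question's sample rows (the temp-table name is mine; the column names follow the question's headers):
CREATE TABLE #Invoices ([Invoice ID] int, [Customer Key] char(1), [Date] date, [Value] int, Something int);
INSERT INTO #Invoices VALUES
(1, 'A', '2019-02-08', 100, 1),
(2, 'B', '2019-02-07',  14, 0),
(3, 'A', '2019-02-06', 234, 1),
(4, 'A', '2019-02-05',  74, 1),
(5, 'B', '2019-02-04',  11, 1),
(6, 'A', '2019-02-03',  12, 0);

SELECT *,
       -- earlier rows for the same customer where Something = 1
       COUNT(CASE WHEN Something = 1 THEN 1 END)
           OVER (PARTITION BY [Customer Key] ORDER BY [Date]
                 ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS [Count]
FROM #Invoices
ORDER BY [Invoice ID];
This should return 2, 1, 1, 0, 0, 0 for the Count column, matching the expected output above.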

partitioning and selecting clusters with multiple records

The title of this question might be confusing, so let me put my issue into words:
I have a table with master_ids, ids and years. A master_id can contain different ids. Each Id is associated with a year. I already partitioned by master_id and gave each year a rank (year_rank).
+-----------+----+------+-----------+
| master_id | id | year | year_rank |
+-----------+----+------+-----------+
|       100 |  1 | 2017 |         1 |
|       100 |  2 | 2016 |         2 |
|       100 |  3 | 2015 |         3 |
|       200 |  9 | 2001 |         1 |
|       300 |  5 | 2020 |         1 |
|       300 |  4 | 2010 |         2 |
|       400 |  7 | 1999 |         1 |
|       400 | 11 | 1996 |         2 |
|       500 | 20 | 1999 |         1 |
|       600 | 25 | 2005 |         1 |
|       600 | 29 | 2005 |         1 |
+-----------+----+------+-----------+
My goal is to pick only the clusters which have more than one record, in order to compare them:
+-----------+----+------+-----------+
| master_id | id | year | year_rank |
+-----------+----+------+-----------+
|       100 |  1 | 2017 |         1 |
|       100 |  2 | 2016 |         2 |
|       100 |  3 | 2015 |         3 |
|       300 |  5 | 2020 |         1 |
|       300 |  4 | 2010 |         2 |
|       400 |  7 | 1999 |         1 |
|       400 | 11 | 1996 |         2 |
+-----------+----+------+-----------+
If I filter with WHERE year_rank > 1, it eliminates the first row of each cluster with multiple records, which I don't want. How can I solve this? I thought about a GROUP BY, but I don't know how to apply it here.
Thank you very much!
Edit: Completely updated for the new requirement. This will only show records for master_ids which have multiple years associated with them; however, it will show all records associated with that master_id even if they are in the same year (see 600 vs 700).
SQLFiddle here
We compute your year_rank in cte1 so that we can aggregate it with the MAX() function in cte2 and filter on whatever threshold you want. We then query cte1 and join to cte2 so that only the records for master_ids that have multiple years associated with them are shown.
WITH cte1 AS (
    SELECT
        master_id,
        id,
        year,
        RANK() OVER (PARTITION BY master_id ORDER BY year DESC) AS year_rank
    FROM tbl
),
cte2 AS (
    SELECT
        master_id
    FROM cte1
    GROUP BY master_id
    HAVING MAX(year_rank) > 1
)
SELECT
    cte1.master_id,
    cte1.id,
    cte1.year,
    cte1.year_rank
FROM cte1
JOIN cte2 ON
    cte1.master_id = cte2.master_id
I figured out a way to eliminate the rows which don't have a discrepancy in years within their master_id:
select *,
       case
           when (master_id = (lead(master_id) over (order by master_id))) and
                (year = (lead(service_year) over (order by master_id))) then 'no show'
           when (master_id = (lag(master_id) over (order by master_id))) and
                (year = (lag(service_year) over (order by master_id))) then 'no show'
           else ''
       end as note
from table
Now I can put all of that into a temp table and delete the records which have 'no show' in the note column.
What do you think of this? Is there an easier way?
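For what it's worth, one possible simplification (my own sketch, not from the original thread): the cte2 filter from the answer above can be folded into a window function, which avoids both the join and the temp-table/delete step:
SELECT master_id, id, year, year_rank
FROM (
    SELECT t.*,
           MAX(year_rank) OVER (PARTITION BY master_id) AS max_rank
    FROM (
        SELECT master_id,
               id,
               year,
               RANK() OVER (PARTITION BY master_id ORDER BY year DESC) AS year_rank
        FROM tbl
    ) t
) x
WHERE max_rank > 1
ORDER BY master_id, year_rank;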

Remove cursor from SQL statement

I want to remove a cursor from my SQL, to increase performance (and because I want to learn best practice, which is supposed to be set-based, without cursors).
Anyway, I have a temp table that looks like this:
+--------+--------+-------+----+
| Period | Change | Value | NR |
+--------+--------+-------+----+
| 201705 |      7 | 26055 |  1 |
| 201704 |     29 |     0 |  2 |
| 201703 |    -92 |     0 |  3 |
| 201702 |   -338 |     0 |  4 |
| 201701 |     81 |     0 |  5 |
| 201612 |    107 |     0 |  6 |
| 201611 |     72 |     0 |  7 |
| 201610 |     54 |     0 |  8 |
| 201609 |     64 |     0 |  9 |
| 201608 |     47 |     0 | 10 |
| 201607 |     23 |     0 | 11 |
| 201606 |     45 |     0 | 12 |
+--------+--------+-------+----+
Currently, the Cursor acts as follows:
DECLARE @Value INT
BEGIN
    DECLARE c_Value CURSOR FOR
        SELECT NR
        FROM ##TMP
        WHERE Value = 0
    ----
    OPEN c_Value
    FETCH NEXT FROM c_Value
    INTO @Value
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SELECT @Value = Value - Change
        FROM ##TMP
        WHERE NR = (SELECT MAX(NR) FROM ##TMP WHERE Value <> 0)
        BEGIN
            UPDATE ##TMP
            SET Value = @Value
            WHERE NR = (SELECT MAX(NR)+1 FROM ##TMP WHERE Value <> 0)
        END
        FETCH NEXT FROM c_Value
        INTO @Value
    END
    CLOSE c_Value
    DEALLOCATE c_Value
END
Result:
+--------+--------+-------+----+
| Period | Change | Value | NR |
+--------+--------+-------+----+
| 201705 |      7 | 26055 |  1 |
| 201704 |     29 | 26048 |  2 |
| 201703 |    -92 | 26019 |  3 |
| 201702 |   -338 | 26111 |  4 |
| 201701 |     81 | 26449 |  5 |
| 201612 |    107 | 26368 |  6 |
| 201611 |     72 | 26261 |  7 |
| 201610 |     54 | 26189 |  8 |
| 201609 |     64 | 26135 |  9 |
| 201608 |     47 | 26071 | 10 |
| 201607 |     23 | 26024 | 11 |
| 201606 |     45 | 26001 | 12 |
+--------+--------+-------+----+
So, how can I achieve this result, without the use of a cursor? I tried it with a CTE, but I can not get this result.
First you need to get the starting value.
SELECT [Value] as StartValue
FROM Table1
WHERE NR = 1
Then, using a cumulative SUM(), you can adjust that starting Value; notice that the current row's [Change] has to be excluded, which is why it is added back after the running SUM.
SQL DEMO
WITH CTE as (
    SELECT [Value] as StartValue
    FROM Table1
    WHERE NR = 1
)
SELECT T.*,
       - SUM([CHANGE]) OVER (ORDER BY [NR])
       + [CHANGE] as TotalChange, -- just for debugging, not needed
       CTE.StartValue
       - SUM([CHANGE]) OVER (ORDER BY [NR])
       + [CHANGE] as NewValue
FROM Table1 T
CROSS JOIN CTE
SQL Server 2012 or higher:
CREATE TABLE ##TMP (
Period int
,Change float
,Value float
,Nr int
);
INSERT INTO ##TMP VALUES
(201705, 7 , 26055, 1)
,(201704, 29 , 0, 2)
,(201703, -92 , 0, 3)
,(201702, -338 , 0, 4)
,(201701, 81 , 0, 5)
,(201612, 107 , 0, 6)
,(201611, 72 , 0, 7)
,(201610, 54 , 0, 8)
,(201609, 64 , 0, 9)
,(201608, 47 , 0,10)
,(201607, 23 , 0,11)
,(201606, 45 , 0,12)
;with cte as (
    SELECT Period, Change, Value as Value_Org, Nr,
           SUM(Value - Change) OVER (ORDER BY Nr ASC) as Value
    FROM ##TMP
)
select a.Period, a.Change, a.Nr, a.Value_Org, a.Value, b.Value,
       isnull(b.Value, a.Value_Org)
from cte as a
left outer join cte as b
    on a.Nr = b.Nr + 1
order by a.Nr
This can be solved by using the windowing functions introduced in SQL Server 2012.
select period,
       change,
       NR = Row_Number() Over(Order by period),
       Value = Sum(Change) Over(Order by period rows unbounded preceding)
from ##TMP
This was freehand and may not parse, but should get you close enough.
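To make that running-total idea concrete against the ##TMP table above (my own sketch, not the answerer's code; it assumes the row with NR = 1 always carries the known starting Value), the running SUM can be anchored on the first row with FIRST_VALUE:
SELECT Period,
       Change,
       NR,
       Value = FIRST_VALUE(Value) OVER (ORDER BY NR)
             - SUM(Change) OVER (ORDER BY NR ROWS UNBOUNDED PRECEDING)
             + Change
FROM ##TMP;
This should reproduce the cursor's result (26055, 26048, 26019, ...).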

SQL Server query for next row value where previous row value

This query gives me Event values from 1 to 20 within an hour; how can I extend it so that a consecutive Event value >= 200 is also required?
SELECT ID, count(Event) as numberoftimes
FROM table_name
WHERE Event >=1 and Event <=20
GROUP BY ID, DATEPART(HH, AtHour)
HAVING DATEPART(HH, AtHour) <= 1
ORDER BY ID desc
In this dummy 24h table:
+----+-------+--------+
| ID | Event | AtHour |
+----+-------+--------+
|  1 |     1 |  11:00 |
|  1 |     4 |  11:01 |
|  1 |     1 |  11:02 |
|  1 |    20 |  11:03 |
|  1 |   200 |  11:04 |
|  1 |     1 |  13:00 |
|  1 |     1 |  13:05 |
|  1 |     2 |  13:06 |
|  1 |   500 |  13:07 |
|  1 |    39 |  13:10 |
|  1 |    50 |  13:11 |
|  1 |     2 |  13:12 |
+----+-------+--------+
I would like to select IDs where an Event with a value between 1 and 20 is followed immediately by a value greater than or equal to 200, within an hour.
Expected result should be something like that:
+----+--------+
| ID | AtHour |
+----+--------+
|  1 |     11 |
|  1 |     13 |
|  2 |     11 |
|  2 |     14 |
|  3 |     09 |
|  3 |     12 |
+----+--------+
or just how many times it has happened for unique ID instead of which hour.
Please excuse me I am still rusty with post formatting!
CREATE TABLE data (Id INT, Event INT, AtHour SMALLDATETIME);
INSERT data (Id, Event, AtHour) VALUES
(1,1,'2017-03-16 11:00:00'),
(1,4,'2017-03-16 11:01:00'),
(1,1,'2017-03-16 11:02:00'),
(1,20,'2017-03-16 11:03:00'),
(1,200,'2017-03-16 11:04:00'),
(1,1,'2017-03-16 13:00:00'),
(1,1,'2017-03-16 13:05:00'),
(1,2,'2017-03-16 13:06:00'),
(1,500,'2017-03-16 13:07:00'),
(1,39,'2017-03-16 13:10:00')
;
; WITH temp as (
    SELECT rownum = ROW_NUMBER() OVER (PARTITION BY id ORDER BY AtHour)
         , *
    FROM data
)
SELECT a.id, DATEPART(HOUR, a.AtHour) as AtHour, COUNT(*) AS NumOfPairs
FROM temp a
JOIN temp b
  ON a.id = b.id
 AND a.rownum = b.rownum - 1
WHERE a.Event BETWEEN 1 AND 20
  AND b.Event >= 200
  AND DATEDIFF(MINUTE, a.AtHour, b.AtHour) <= 60
GROUP BY a.id, DATEPART(HOUR, a.AtHour)
;
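The question also asks, as an alternative, for "just how many times it has happened for unique ID instead of which hour"; dropping the hour from the grouping in the final SELECT over the same temp CTE gives that:
SELECT a.id, COUNT(*) AS NumOfPairs
FROM temp a
JOIN temp b
  ON a.id = b.id
 AND a.rownum = b.rownum - 1
WHERE a.Event BETWEEN 1 AND 20
  AND b.Event >= 200
  AND DATEDIFF(MINUTE, a.AtHour, b.AtHour) <= 60
GROUP BY a.id;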

SQL Server: Split a Single Record, divide the Amount Values, and insert as Rows

PROBLEM
I have been trying to work out how, for a particular From_date and To_date, to take the number of months between the dates, divide the Amount values by that number, and split the single record in the sample into that many rows in the Resultant (5 rows for record 1).
sr | Number         | From_date | To_Date | Amount_1 | Amount_2 | Amount_3 | Amount_4 | Type
---|----------------|-----------|---------|----------|----------|----------|----------|-----
 1 | 20140911204120 | Jan-14    | May-14  | 5000     | 2500     | 1000     | 200      | A
 2 | 20140911204122 | Feb-14    | Apr-14  | 6000     | 3500     | 2000     | 1200     | R
 3 | 20140911204124 | Feb-14    | Jun-14  | 7000     | 4500     | 3000     | 2200     | R
 4 | 20140911204126 | Jul-14    | Sep-14  | 8000     | 5500     | 4000     | 3200     | R
 5 | 20140911204128 | Mar-14    | Aug-14  | 9000     | 6500     | 5000     | 4200     | A
Resultant: Record 1 after the process
So here, for record 1:
1. We take the number of months from From_date to To_date inclusive, which gives us 5.
2. We then divide the Amount values by 5 and split the record into 5 rows, one for each individual month.
---------------------------------------------------------------------------------
Sr | Number         | Months   | Amount_1 | Amount_2 | Amount_3 | Amount_4 | Type
---------------------------------------------------------------------------------
1  | 20140911204120 | January  | 1000     | 500      | 200      | 10       | A
2  | 20140911204120 | February | 1000     | 500      | 200      | 10       | A
3  | 20140911204120 | March    | 1000     | 500      | 200      | 10       | A
4  | 20140911204120 | April    | 1000     | 500      | 200      | 10       | A
5  | 20140911204120 | May      | 1000     | 500      | 200      | 10       | A
You didn't mention which version of SQL Server you are using, so I did this with 2012, but it should probably work on 2008 too if needed (although the TRY_PARSE used for parsing dates has to be changed). Also, I'm guessing your sample desired output is wrong for Amount_4, as 200/5 = 40, not 10.
Note that all the divisions are integer divisions, which means potential loss of precision; if you need more precise values you have to cast to some appropriate type (decimal or float).
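For instance (a hypothetical tweak, not part of the linked fiddle), the Amount_2 division in Query 1 below could be written as:
Amount_2 = CAST(Amount_2 AS decimal(10,2)) / c, -- e.g. 3500 / 3 ≈ 1166.67 instead of 1166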
SQL Fiddle
MS SQL Server 2012 Schema Setup:
CREATE TABLE YourTable
([sr] int, [Number] bigint, [From_date] varchar(6), [To_date] varchar(6), [Amount_1] int, [Amount_2] int, [Amount_3] int, [Amount_4] int, [Type] varchar(1));
INSERT INTO YourTable ([sr], [Number], [From_date], [To_date], [Amount_1], [Amount_2], [Amount_3], [Amount_4], [Type])
VALUES
(1, 20140911204120, 'Jan-14', 'May-14', 5000, 2500, 1000, 200, 'A'),
(2, 20140911204122, 'Feb-14', 'Apr-14', 6000, 3500, 2000, 1200, 'R'),
(3, 20140911204124, 'Feb-14', 'Jun-14', 7000, 4500, 3000, 2200, 'R'),
(4, 20140911204126, 'Jul-14', 'Sep-14', 8000, 5500, 4000, 3200, 'R'),
(5, 20140911204128, 'Mar-14', 'Aug-14', 9000, 6500, 5000, 4200, 'A');
Query 1:
;WITH cte (Date, To_date, [Number], [Amount_1], [Amount_2], [Amount_3], [Amount_4], [Type]) AS
(
    SELECT
        Date = TRY_PARSE(From_date AS DATE),
        To_date = TRY_PARSE(To_date AS DATE),
        [Number],
        [Amount_1],
        [Amount_2],
        [Amount_3],
        [Amount_4],
        [Type]
    FROM YourTable
    UNION ALL
    SELECT
        DATEADD(MONTH, 1, Date),
        To_date,
        [Number],
        [Amount_1],
        [Amount_2],
        [Amount_3],
        [Amount_4],
        [Type]
    FROM cte
    WHERE Date < To_date
)
SELECT
    Sr = ROW_NUMBER() OVER (PARTITION BY Number ORDER BY Number, Date),
    Number,
    Months,
    Amount_1 = Amount_1/c,
    Amount_2 = Amount_2/c,
    Amount_3 = Amount_3/c,
    Amount_4 = Amount_4/c,
    Type
FROM (
    SELECT
        Number,
        Months = DATENAME(MONTH, Date),
        Date,
        Amount_1,
        Amount_2,
        Amount_3,
        Amount_4,
        Type,
        (SELECT COUNT(*) FROM cte WHERE Number = c.Number GROUP BY Number) AS c
    FROM cte c
) a
ORDER BY Number, Date
OPTION (MAXRECURSION 1000)
Results:
| SR | NUMBER | MONTHS | AMOUNT_1 | AMOUNT_2 | AMOUNT_3 | AMOUNT_4 | TYPE |
|----|----------------|-----------|----------|----------|----------|----------|------|
| 1 | 20140911204120 | January | 1000 | 500 | 200 | 40 | A |
| 2 | 20140911204120 | February | 1000 | 500 | 200 | 40 | A |
| 3 | 20140911204120 | March | 1000 | 500 | 200 | 40 | A |
| 4 | 20140911204120 | April | 1000 | 500 | 200 | 40 | A |
| 5 | 20140911204120 | May | 1000 | 500 | 200 | 40 | A |
| 1 | 20140911204122 | February | 2000 | 1166 | 666 | 400 | R |
| 2 | 20140911204122 | March | 2000 | 1166 | 666 | 400 | R |
| 3 | 20140911204122 | April | 2000 | 1166 | 666 | 400 | R |
| 1 | 20140911204124 | February | 1400 | 900 | 600 | 440 | R |
| 2 | 20140911204124 | March | 1400 | 900 | 600 | 440 | R |
| 3 | 20140911204124 | April | 1400 | 900 | 600 | 440 | R |
| 4 | 20140911204124 | May | 1400 | 900 | 600 | 440 | R |
| 5 | 20140911204124 | June | 1400 | 900 | 600 | 440 | R |
| 1 | 20140911204126 | July | 2666 | 1833 | 1333 | 1066 | R |
| 2 | 20140911204126 | August | 2666 | 1833 | 1333 | 1066 | R |
| 3 | 20140911204126 | September | 2666 | 1833 | 1333 | 1066 | R |
| 1 | 20140911204128 | March | 1500 | 1083 | 833 | 700 | A |
| 2 | 20140911204128 | April | 1500 | 1083 | 833 | 700 | A |
| 3 | 20140911204128 | May | 1500 | 1083 | 833 | 700 | A |
| 4 | 20140911204128 | June | 1500 | 1083 | 833 | 700 | A |
| 5 | 20140911204128 | July | 1500 | 1083 | 833 | 700 | A |
| 6 | 20140911204128 | August | 1500 | 1083 | 833 | 700 | A |
You can join your table with a months table to expand the data; that will be more efficient than the recursive query.
SQLFiddle
;WITH months (month, month_name) AS
(
    select number, DATENAME(MONTH, cast(rtrim(20140000 + number*100 + 1) as datetime))
    from master..spt_values
    where number between 1 and 12 AND TYPE = 'P'
)
SELECT Sr = ROW_NUMBER() OVER (PARTITION BY Number ORDER BY Number, month),
       Number,
       Month_name,
       Amount_1 = Amount_1/(datediff(month, TRY_PARSE(from_date AS DATE), TRY_PARSE(to_date AS DATE)) + 1),
       Amount_2 = Amount_2/(datediff(month, TRY_PARSE(from_date AS DATE), TRY_PARSE(to_date AS DATE)) + 1),
       Amount_3 = Amount_3/(datediff(month, TRY_PARSE(from_date AS DATE), TRY_PARSE(to_date AS DATE)) + 1),
       Amount_4 = Amount_4/(datediff(month, TRY_PARSE(from_date AS DATE), TRY_PARSE(to_date AS DATE)) + 1),
       Type
FROM YourTable
JOIN Months
    on month between datepart(month, TRY_PARSE(from_date AS DATE)) and datepart(month, TRY_PARSE(to_date AS DATE))
