SQL Server group by key and sum over an xml column - sql-server

As stated in the title, I have a table with a composite key (a main key and a position key) and a Value column that contains XML with a fixed schema.
The XML looks like the following:
<Data>
  <ItemACount></ItemACount>
  <ItemBCount></ItemBCount>
</Data>
Both ItemACount and ItemBCount hold positive integers.
I would like to group the records that share the same main key (but have different position keys) and then, for each group, compute the sum of ItemACount and the sum of ItemBCount.
I wrote the SQL below:
SELECT
    MainKey AS MainKey
    SUM ( [Value].value('/Data/ItemACount/#value') ) AS TotalItemACount ,
    SUM ( [Value].value('/Data/ItemBCount/#value') ) AS TotalItemBCount
FROM
    [dbo].[tblItems]
GROUP BY
    [MainKey]
But I get an error:
Cannot find either column "Value" or the user-defined function or aggregate "Value.value", or the name is ambiguous.
I would like to understand what the correct syntax is.

Try this; the value() method takes two arguments, a singleton XQuery expression (hence the [1]) and the SQL type to return:
SELECT
    MainKey,
    SUM([Value].value('(/Data/ItemACount)[1]', 'int')) AS TotalItemACount,
    SUM([Value].value('(/Data/ItemBCount)[1]', 'int')) AS TotalItemBCount
FROM [dbo].[tblItems]
GROUP BY [MainKey]
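For reference, here's a minimal, self-contained sketch of that fix. The PositionKey column name and the sample rows are assumptions (only MainKey and Value appear in the question):

CREATE TABLE dbo.tblItems
(
    MainKey     INT NOT NULL,
    PositionKey INT NOT NULL,      -- hypothetical name for the position key
    [Value]     XML NOT NULL,
    PRIMARY KEY (MainKey, PositionKey)
);

INSERT INTO dbo.tblItems (MainKey, PositionKey, [Value])
VALUES
    (1, 1, N'<Data><ItemACount>2</ItemACount><ItemBCount>5</ItemBCount></Data>'),
    (1, 2, N'<Data><ItemACount>3</ItemACount><ItemBCount>1</ItemBCount></Data>'),
    (2, 1, N'<Data><ItemACount>7</ItemACount><ItemBCount>0</ItemBCount></Data>');

SELECT
    MainKey,
    SUM([Value].value('(/Data/ItemACount)[1]', 'int')) AS TotalItemACount,
    SUM([Value].value('(/Data/ItemBCount)[1]', 'int')) AS TotalItemBCount
FROM dbo.tblItems
GROUP BY MainKey;
-- MainKey 1 -> TotalItemACount = 5, TotalItemBCount = 6
-- MainKey 2 -> TotalItemACount = 7, TotalItemBCount = 0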

Related

Can't perform an aggregate function on an expression containing an aggregate or a subquery

My tables are created like this:
create table ##temp2 (min_col1_value varchar(100))
create table ##temp1 (max_col1_value varchar(100))
##temp2 has values like this:
min_col1_value
-------------------
1
0
10
1
I'm trying to get the "frequency count of minimum length values" and expect the result to be 3.
Another example, for the maximum (##temp1), is:
max_col1_value
-------------------
1000
1234
10
1111
123
2345
I'm trying to get the "frequency count of maximum length values" and expect the result to be 4.
When I run these queries:
select count(min(len(convert(int, min_col1_value)))) from ##temp2 group by min_col1_value

select count(max(len(convert(int, max_col1_value)))) from ##temp1 group by max_col1_value
I get the error: Cannot perform an aggregate function on an expression containing an aggregate or a subquery.
How do I get the desired result?
You can't nest one aggregate inside another in the same SELECT statement and, even if you could, your MIN(LEN()) would return a single value: 1, since the minimum length in ##temp2 is 1. Counting that will just give you 1, because there is only one value to count.
You actually want to count how many values have that minimum length, so you'll need something like:
SELECT count(*)
FROM ##temp2
WHERE len(min_col1_value) IN (SELECT min(len(min_col1_value)) FROM ##temp2)
That WHERE clause says: only count values in ##temp2 whose length equals the minimum length among all the values in ##temp2. This should return 3 based on your sample data.
The same logic can be applied to either table for min or max.
This should get you your desired results:
SELECT COUNT(*)
FROM ##temp2
WHERE LEN(min_col1_value) =
(
    SELECT MIN(LEN(min_col1_value))
    FROM ##temp2
)

SELECT COUNT(*)
FROM ##temp1
WHERE LEN(max_col1_value) =
(
    SELECT MAX(LEN(max_col1_value))
    FROM ##temp1
)
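To sanity-check both queries against the sample data in the question, the tables can be populated like this; the first query then returns 3 and the second returns 4:

INSERT INTO ##temp2 (min_col1_value) VALUES ('1'), ('0'), ('10'), ('1');
INSERT INTO ##temp1 (max_col1_value) VALUES ('1000'), ('1234'), ('10'), ('1111'), ('123'), ('2345');
-- minimum length in ##temp2 is 1, and three values have that length
-- maximum length in ##temp1 is 4, and four values have that length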

Merging same columns from a table with different conditions and counting specific field

I want to retrieve data from one table, but the data is based on conditions.
I've now added the count section to my query, but it doesn't execute and the following error occurs:
Column 'DailyStockHistory.Stc_Item_Desc' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause
The query is:
Select
*
From
(
Select
dsh.Stc_Item_Desc, dsh.Imported_Date, dsh.Store_ID, dsh.Quantity_On_Hand,Count(inv.Sim_Network_Type) counterr
From DailyStockHistory dsh,
Temp_DailyInventory inv
Where Stc_Item_Desc = 'SAWA/QuickNet prepaid triosim'
And dsh.Store_ID Like 'S%'
And (Imported_Date = '3-15-2017' Or Imported_Date = '3-22-2017')
) AS Data
PIVOT
(
SUM(Quantity_On_Hand)
FOR Imported_Date IN ([3-15-2017],[3-22-2017])
) AS PIVOTData
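For what it's worth, the error means that every non-aggregated column in the inner SELECT (Stc_Item_Desc, Imported_Date, Store_ID, Quantity_On_Hand) must appear in a GROUP BY clause. A rough sketch of one way the inner query could be grouped, pre-aggregating Quantity_On_Hand and assuming a hypothetical Store_ID join between the two tables (the question doesn't show how they relate):

Select *
From
(
    Select
        dsh.Stc_Item_Desc,
        dsh.Imported_Date,
        dsh.Store_ID,
        Sum(dsh.Quantity_On_Hand) As Quantity_On_Hand,
        Count(inv.Sim_Network_Type) As counterr
    From DailyStockHistory dsh
    Join Temp_DailyInventory inv
        On inv.Store_ID = dsh.Store_ID   -- hypothetical join key; replace with the real relationship
    Where dsh.Stc_Item_Desc = 'SAWA/QuickNet prepaid triosim'
        And dsh.Store_ID Like 'S%'
        And dsh.Imported_Date In ('3-15-2017', '3-22-2017')
    Group By dsh.Stc_Item_Desc, dsh.Imported_Date, dsh.Store_ID
) AS Data
PIVOT
(
    SUM(Quantity_On_Hand)
    FOR Imported_Date IN ([3-15-2017], [3-22-2017])
) AS PIVOTData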

Parse single column result set into multiple columns SQL

I've written a stored procedure that returns a two-column temp table: an ID column that is not unique but holds a value between 2 and 12 to group on, and a column holding the actual data values. I want to break this out into a single table of 11 columns, one column for each data set.
I'd like to have the values parsed out into columns by ID. An identity column is not necessary, since the values will be unique within their own column. Something like:
Data2 Data3 Data4
102692... 103516.... 108408....
104114... 103476.... 108890....
and so on. I have tried looping through the data sets with a WHILE loop, but what's troubling me is mainly getting these contained in one insert. I can't figure out how to say
While recordCount > 0
Begin
Insert into #tempTable(value1ID2,value1ID3,Value1ID4)
End
and then loop through value2ID2, value2ID3, etc.
If this isn't attainable, that's fine, I'll figure out a workaround; but the main reason I'm trying to do this is for a Report Builder dataset for a line chart that will eventually share a date grouping.
Since you need to aggregate string values, you will need to use either the MAX or MIN aggregate function. The problem with that is it will return a single row for each column. In order to return multiple rows, you will need a windowing function like ROW_NUMBER() to generate a unique value for each id/string combination. This will allow you to return multiple rows for each id:
select Data2 = [2], Data3 = [3], Data4 = [4]
from
(
select id, stringvalue,
row_number() over(partition by id order by stringvalue) seq
from yourtable
) d
pivot
(
max(stringvalue)
for id in ([2], [3], [4])
) piv
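Since the question mentions IDs running from 2 to 12, the column aliases and the IN list would presumably be widened to cover all eleven values, along these lines:

select
    Data2  = [2],  Data3  = [3],  Data4  = [4],  Data5  = [5],  Data6  = [6],
    Data7  = [7],  Data8  = [8],  Data9  = [9],  Data10 = [10], Data11 = [11], Data12 = [12]
from
(
    select id, stringvalue,
        row_number() over(partition by id order by stringvalue) seq
    from yourtable
) d
pivot
(
    max(stringvalue)
    for id in ([2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12])
) piv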

How to add values from other columns into next columns

I've pivoted a table and I have a base value, let's say 95%.
The schema is like
Name, BaseValue, Quarter1, Quarter2, etc..
West, 95% , 0.5% , -0.2% , ...
I'd like Quarter1 to become BaseValue + Quarter1's initial value, i.e. 95.5%.
I'd like Quarter2 to become Quarter1 + Quarter2's initial value, i.e. 95.3%.
Here's the setup in SQLFiddle
http://sqlfiddle.com/#!3/78dd3/1
Unpivot, get the running totals, pivot back. Assuming the values are of a numeric type and the version used is SQL Server 2012, here's one way to implement that:
WITH UnpivotAndRunningTotals AS (
    SELECT
        Name,
        Attr,
        Value = SUM(Value) OVER (PARTITION BY Name ORDER BY Attr)
    FROM atable
    UNPIVOT (
        Value FOR Attr IN (BaseValue, Quarter1, Quarter2, Quarter3, Quarter4)
    ) AS u
)
SELECT
    Name,
    BaseValue, Quarter1, Quarter2, Quarter3, Quarter4
FROM UnpivotAndRunningTotals
PIVOT (
    MAX(Value) FOR Attr IN (BaseValue, Quarter1, Quarter2, Quarter3, Quarter4)
) AS p
;
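To make that concrete, here's a small standalone setup matching the question's example row; the column types and the Quarter3/Quarter4 values are assumptions, since the fiddle isn't reproduced here:

CREATE TABLE atable
(
    Name      VARCHAR(50),
    BaseValue DECIMAL(5, 2),
    Quarter1  DECIMAL(5, 2),
    Quarter2  DECIMAL(5, 2),
    Quarter3  DECIMAL(5, 2),
    Quarter4  DECIMAL(5, 2)
);

INSERT INTO atable VALUES ('West', 95.0, 0.5, -0.2, 0.3, 0.1);

-- The query above then returns:
-- Name | BaseValue | Quarter1 | Quarter2 | Quarter3 | Quarter4
-- West | 95.00     | 95.50    | 95.30    | 95.60    | 95.70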
With my comment above, I'll assume you want to do this and maintain the existing column values. The solution is a simple UPDATE using the + operator.
ALTER TABLE [YourTable] ADD [4Q13Total] FLOAT   -- substitute your actual table name

UPDATE [YourTable]
SET [4Q13Total] = [BASEOCC] + [4Q13]
You could also just SELECT the computed value, if you'd like:
SELECT [BASEOCC] + [4Q13] AS [Q1Total] FROM [YourTable]

Show historical results from same table

I have a table called MyHistory. It has about 1,000 rows and performance is poor at best.
What I want to do is select rows showing the next row as a result. This is probably a bad example.
This is the MyHistory structure: ID int, DateTimeColumn datetime, ValueResult decimal(4,2)
My table has the following data:
ID|DateTimeColumn|ValueResult
1|8/1/2005 1:01:29 PM|2
1|8/1/2006 1:01:29 PM|3
1|8/1/2007 1:01:29 PM|5
1|8/1/2008 1:01:29 PM|9
What I want to do is select out of this the following data
ID|DateTimeColumn|ValueResult|ChangeValue
1|8/1/2008 1:01:29 PM|9|4
1|8/1/2007 1:01:29 PM|5|2
1|8/1/2006 1:01:29 PM|3|1
1|8/1/2005 1:01:29 PM|2|
You'll notice that the ID stays the same and the DateTimeColumn is now descending. That's the easy part. But how do I make a self-referencing join (in order to calculate the difference in value) based on which datetime comes next?
Thanks!
So, the task is:
to order records by DateTimeColumn descending,
to set sequence number for each record to identify next record,
to calculate required difference in value.
This is one of many possible solutions:
-- Use a CTE to build an intermediate result set with sequence numbers (ranks)
;WITH a ([rank], ID, DateTimeColumn, ValueResult) AS
(
    select rank() OVER (ORDER BY m.DateTimeColumn DESC) as [rank], ID, DateTimeColumn, ValueResult
    from MyHistory m
)
-- Select all resulting columns
select a1.ID,
    a1.DateTimeColumn,
    a1.ValueResult,
    a1.ValueResult - a2.ValueResult as ChangeValue -- difference between the current record and the next (older) one
from a a1
left join a a2
    on a2.[rank] = a1.[rank] + 1 -- link each record to the next (older) one; LEFT JOIN keeps the oldest row, with a NULL ChangeValue
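On SQL Server 2012 or later, the self-join can be avoided entirely with LAG(); a quick sketch, assuming the difference should be calculated per ID:

SELECT
    ID,
    DateTimeColumn,
    ValueResult,
    -- LAG fetches the previous (older) row's value within the same ID;
    -- the oldest row has no predecessor, so its ChangeValue is NULL
    ValueResult - LAG(ValueResult) OVER (PARTITION BY ID ORDER BY DateTimeColumn) AS ChangeValue
FROM MyHistory
ORDER BY DateTimeColumn DESC;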
