I have a table with 3 text columns on which I want to do some mathematical calculations.
Table looks like below:
Date Column1 Column2 Column3
-----------------------------------------
2012-08-01 STABLE NEG STABLE
2012-08-02 NEG NEG STABLE
2012-08-03 STABLE STABLE STABLE
What I want to achieve is:
If 2 of the 3 columns equal 'STABLE', it returns 66% (i.e. 2/3), as in the first row.
If 1 of the 3 columns equals 'STABLE', it returns 33% (i.e. 1/3), as in the second row.
If all 3 columns equal 'STABLE', it returns 100% (i.e. 3/3), as in the third row.
How can I achieve this using SQL? I'm currently working on SQL Server 2008 R2.
I hope my question is clear enough.
So, something like this?
select
    ((case when col1 = 'Stable' then 1.0 else 0.0 end) +
     (case when col2 = 'Stable' then 1.0 else 0.0 end) +
     (case when col3 = 'Stable' then 1.0 else 0.0 end)) / 3.0
from yourtable
You can do some formatting on the output, but this should be very close to what you're looking for.
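For example, to present it as a rounded percentage (a minimal sketch; yourtable and the column names come from the answer above, and the multiply-by-100 formatting is my assumption about the desired output):

select
    cast(((case when col1 = 'Stable' then 1.0 else 0.0 end) +
          (case when col2 = 'Stable' then 1.0 else 0.0 end) +
          (case when col3 = 'Stable' then 1.0 else 0.0 end)) / 3.0 * 100.0
         as decimal(5,2)) as StablePercent  -- 66.67, 33.33, or 100.00
from yourtable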
If those are the only choices for column values:

select ( Len( Column1 ) + Len( Column2 ) + Len( Column3 ) - 9 ) / 3 / 3.0 * 100.0 as 'Percent'
from Foo

This works because Len('STABLE') = 6 and Len('NEG') = 3, so the three lengths sum to 3 * (number of STABLE columns) + 9. The optimizer should handle the constant folding; the constants are shown for clarity.
Divine Comedy describes a place for people who write code like this.
I want to normalize a SQL Server column before adding it to a DataGridView, and I do it in my SQL query. But I have a problem:
if the value divides evenly (for example 100/100 = 1), I don't want to see it as 1.00000000000000. And if it does not divide evenly (for example 3/7 = 0.42857142857...), I want to round it to two digits after the decimal point (0.43).
This is my code:
string q = "SELECT column1, column2, ((CONVERT(DECIMAL(18, 2), column3) / (SELECT MAX(column3) FROM tablename)) * 100) AS normalizedColumn3
FROM tablename .......;
I need to normalize column 3; its values before normalizing are between 1 and 2,000,000. After normalizing, the values will be between 0.01 and 100.
Normalize formula:
column3 = ((column3 / maximum column3) * 100)
Thank you for your answers.
You'll have a small matter of data loss if you only keep two decimals: the smallest value, 1, normalizes to 0.00005, so you'd need at least 5 decimal places for values between 1 and 2,000,000.
Example
Declare @YourTable table (Col3 float)
Insert into @YourTable values
(1),(1536),(1000000),(2000000)

Select A.*
      ,NewVal = convert(decimal(10,2), (Col3*100.0) / (Select max(Col3) from @YourTable))
 From @YourTable A
Returns

Col3      NewVal
1         0.00      -- at decimal(10,5) you would see 0.00005
1536      0.08
1000000   50.00
2000000   100.00
I believe you can use ROUND(numeric_expression, length [, function]) or SELECT ROUND(CAST(# AS decimal(#, #)), #); to round your decimal.
Here is more info on it: https://learn.microsoft.com/en-us/sql/t-sql/functions/round-transact-sql?view=sql-server-2017
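A quick illustration of both approaches (a sketch; the literals stand in for the question's column3 / MAX(column3) expression):

SELECT ROUND(3.0 / 7.0, 2) AS rounded_only,            -- 0.430000 (ROUND keeps the original scale)
       CONVERT(DECIMAL(10, 2), 3.0 / 7.0) AS two_dp,   -- 0.43
       CONVERT(DECIMAL(10, 2), 100.0 / 100.0) AS whole -- 1.00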
I have a table "Items" with records that shows a progress through time of a single item. One of the columns is numeric value (DeltaLimit) showing price change compared to the starting point of the records.
I also have a defined #Limit variable. I need to select the record where DeltaLimit exceeds #Limit for the first time, and if the DeltaLimit exceeds multiples of the #Limit, I need to do the same.
Basically, I need the first row where DeltaLimit exceeds #Limit, the fist row where DeltaLimit exceeds 2*#Limit, the first row where DeltaLimit exceeds 3*#Limit etc.
Source data - #Limit = 0.5
Name | DeltaLimit
Ex1 | 0.4
Ex2 | 0.6
Ex3 | 0.9
Ex4 | 1.1
Ex5 | 1.3
Desired output
Name | DeltaLimit
Ex2 | 0.6
Ex4 | 1.1
The only thing I managed to do was to get the first row that exceeds @Limit itself with the following select, but I have no idea how to get the rows that exceed the following multiples of @Limit. Any help would be greatly appreciated.

select * from Items
where DeltaLimit = (select top 1 DeltaLimit from Items where DeltaLimit > @Limit order by DeltaLimit);
You can divide DeltaLimit by @Limit and get your ROW_NUMBER using the result.
SELECT * FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY convert(int, DeltaLimit / @Limit) ORDER BY DeltaLimit) Rn
    FROM Table1
    WHERE DeltaLimit > @Limit
) t
WHERE t.Rn = 1
Rextester.com demo
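A self-contained way to try this with the question's sample data (a sketch; the table variable @Items stands in for the real table):

DECLARE @Limit float = 0.5;
DECLARE @Items TABLE (Name varchar(10), DeltaLimit float);
INSERT INTO @Items VALUES ('Ex1', 0.4), ('Ex2', 0.6), ('Ex3', 0.9), ('Ex4', 1.1), ('Ex5', 1.3);

SELECT Name, DeltaLimit FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY convert(int, DeltaLimit / @Limit) ORDER BY DeltaLimit) Rn
    FROM @Items
    WHERE DeltaLimit > @Limit
) t
WHERE t.Rn = 1;  -- returns Ex2 (0.6) and Ex4 (1.1)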
You need to create a projection relation for the delta limit multipliers... basically a temporary table with values {0.5, 1.0, 1.5, 2.0}, etc., up to the highest value needed by the data. I'll leave that as an exercise (one possible sketch follows after the demo link below) and just call it #ScaledLimits in the code. Once you have it, you can use it like this:
SELECT results.*
FROM #ScaledLimits sl
CROSS APPLY (
    SELECT TOP 1 *
    FROM Items
    WHERE Items.DeltaLimit > sl.DeltaLimit
    ORDER BY Items.DeltaLimit
) results
See it in action here:
http://rextester.com/QIP53078
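As promised above, one possible way to populate #ScaledLimits (a sketch, not part of the original answer; @MaxDelta is a helper variable I've introduced, while @Limit and Items come from the question):

CREATE TABLE #ScaledLimits (DeltaLimit float);

DECLARE @MaxDelta float = (SELECT MAX(DeltaLimit) FROM Items);

-- Build @Limit, 2*@Limit, 3*@Limit, ... up to the highest DeltaLimit in the data.
;WITH n AS (
    SELECT @Limit AS DeltaLimit
    UNION ALL
    SELECT DeltaLimit + @Limit FROM n WHERE DeltaLimit + @Limit <= @MaxDelta
)
INSERT INTO #ScaledLimits (DeltaLimit)
SELECT DeltaLimit FROM n
OPTION (MAXRECURSION 0);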
I have a T-SQL query that calculates percentages, and it calculates fine if the result is > 1, but returns 0 when it is less than 1. My calculation is like so:
create table dbo.#temptable (
    total_sold int null,
    #_in_ny int null,
    percent_in_ny decimal(18,2) null
) on [primary]

insert into dbo.#temptable
select count(t.booknum) as totalsold, t.#_in_ny, 100 * t.#_in_ny / count(t.booknum)
from mytable t
group by t.#_in_ny
That gives me:

total   ny sales   % sales in ny
650     4          0    -- this should show up as 0.61, since 4 is 0.61% of 650
The problem is that SQL Server does integer division. The simplest solution is to use 100.0 as the constant:
insert into dbo.#temptable
select count(t.booknum) as totalsold, t.#_in_ny,
       t.#_in_ny * 100.0 / count(t.booknum)
from mytable t
group by t.#_in_ny
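A quick illustration of the difference, using the figures from the question:

SELECT 100 * 4 / 650,    -- 0            (integer division)
       4 * 100.0 / 650;  -- 0.615384...  (decimal division)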
t.#_in_ny / count(t.booknum) - you are doing integer division here. Convert both operands to floats or decimals.

insert into dbo.#temptable
select count(t.booknum) as totalsold, t.#_in_ny, 100 * CONVERT(float, t.#_in_ny) / CONVERT(float, count(t.booknum))
from mytable t
group by t.#_in_ny

EDIT: See Gordon's answer as well. While either solution works, his is certainly a bit more elegant than mine.
How does SQL Server know to retrieve these values this way?
Key someMoney
----------- ---------------------
1 5.00
2 5.002
3 5.0001
Basically, I'm wondering how to know how many decimal places there are without much of a performance hit.
I want to get
Key someMoney places
----------- --------------------- ----------
1 5.00 2
2 5.002 3
3 5.0001 4
Money has 4 decimal places; it's a fixed-point data type.
http://msdn.microsoft.com/en-us/library/ms179882.aspx
Is SQL Server 'MONEY' data type a decimal floating point or binary floating point?
So this is a huge ugly hack, but it will give you the value you're looking for...
DECLARE @TestValue MONEY
SET @TestValue = 1.001

DECLARE @TestString VARCHAR(50)
-- Render at the full 4-decimal scale, turn zeroes into spaces, trim the
-- trailing ones, then restore the embedded zeroes.
SET @TestString = REPLACE(RTRIM(REPLACE(CONVERT(VARCHAR(50), CONVERT(DECIMAL(10,4), @TestValue)), '0', ' ')), ' ', '0')

SELECT LEN(@TestString) - CHARINDEX('.', @TestString) AS Places
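Tracing the example: CONVERT(DECIMAL(10,4), 1.001) renders as '1.0010'; turning zeroes into spaces and RTRIMming strips only the trailing one, and restoring the zeroes leaves '1.001', so LEN - CHARINDEX('.') = 5 - 2 = 3. Be aware that a value like 5.00 yields 0 places with this hack, not the 2 shown in the question's desired output.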
This produces the correct results, but I'm not sure if it performs well enough for you and I haven't tried it with data other than the examples you listed:
;
with money_cte ([Key], [someMoney])
as
(
select 1, cast(5.00 as money)
union
select 2, cast(5.002 as money)
union
select 3, cast(5.0001 as money)
)
select [Key], [someMoney], abs(floor(log10([someMoney] - round([someMoney], 0, 1)))) as places
from money_cte
where [someMoney] - round([someMoney], 0, 1) <> 0
union
select [Key], [someMoney], 2 as places
from money_cte
where [someMoney] - round([someMoney], 0, 1) = 0
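How this works: round([someMoney], 0, 1) with a non-zero third argument truncates rather than rounds, so the subtraction isolates the fractional part, and abs(floor(log10(...))) measures how far its first significant digit sits from the decimal point. For example, 5.002 gives 0.002, log10(0.002) ≈ -2.7, floor makes that -3, and abs gives 3. Integral values fall through to the second branch, which hard-codes 2 places. One caveat of mine: a fraction like 0.022 would report 2 rather than 3, since only the leading significant digit is counted.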
The client is formatting that: SSMS or whatever front end you're using. SQL Server returns the full money value in the data stream, and it takes a full 8 bytes (http://msdn.microsoft.com/en-us/library/cc448435.aspx). If you have SQL Server convert to varchar, it defaults to 2 decimal places.
Notice that the Stack Overflow data browser doesn't even show the same results you have:
https://data.stackexchange.com/stackoverflow/q/111288/
;
with money_cte ([Key], [someMoney])
as
(
select 1, cast(5.00 as money)
union
select 2, cast(5.002 as money)
union
select 3, cast(5.0001 as money)
)
select *
, CONVERT(varchar, someMoney) AS varchar_version
, CONVERT(varchar, someMoney, 2) AS varchar_version2
FROM money_cte
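For the sample rows, varchar_version comes back as 5.00 for all three values (the default money-to-varchar conversion uses two decimal places), while varchar_version2 uses style 2 and emits all four stored digits: 5.0000, 5.0020, 5.0001. The shorter display really is client-side formatting, not storage.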
I have the following table in SQL Server Express edition:
Time Device Value
0:00 1 2
0:01 2 3
0:03 3 5
0:03 1 3
0:13 2 5
0:22 1 7
0:34 3 5
0:35 2 6
0:37 1 5
The table is used to log the events of different devices which are reporting their latest values. What I'd like to do is prepare the data so that I can present the average value over time and eventually create a chart from it. I've manipulated this example data in Excel in the following way:
Time Average value
0:03    3.666666667
0:13    4.333333333
0:22    5.666666667
0:34    5.666666667
0:35    6
0:37    5.333333333
So, at time 0:03 I need to take the latest data I have for each device and calculate the average. In this case it's (3+3+5)/3 = 3.67. At time 0:13 the steps would be repeated, and again at 0:22, ...
As I'd like to keep everything within SQL Server (I wouldn't like to create a service with C# or similar which would grab the data and store it into some other table), I'd like to know the following:
Is this the right approach, or should I use some other concept for calculating the averages when preparing charting data?
If yes, what's the best way to implement it? A table view, a function within the database, a stored procedure (which would be called from the charting API)?
Any suggestions on how to implement this?
Thank you in advance.
Mark
Update 1
In the meantime I got one idea for how to approach this problem. I'd kindly ask for your comments on it, and I'd still need some help to get the problem resolved.
So, the idea is to crosstab the table like this:
Time Device1Value Device2Value Device3Value
0:00 2 NULL NULL
0:01 NULL 3 NULL
0:03 3 NULL 5
0:13 NULL 5 NULL
0:22 7 NULL NULL
0:34 NULL NULL 5
0:35 NULL 6 NULL
0:37 5 NULL NULL
The query for this to happen would be:
SELECT Time,
(SELECT Stock FROM dbo.Event WHERE Time = S.Time AND Device = 1) AS Device1Value,
(SELECT Stock FROM dbo.Event WHERE Time = S.Time AND Device = 2) AS Device2Value,
(SELECT Stock FROM dbo.Event WHERE Time = S.Time AND Device = 3) AS Device3Value
FROM dbo.Event S GROUP BY Time
What I'd still need to do is write a user-defined function, called within this query, that writes the last available value in place of each NULL (and leaves NULL where no earlier value exists). With this function I'd get the following results:
Time Device1Value Device2Value Device3Value
0:00 2 NULL NULL
0:01 2 3 NULL
0:03 3 3 5
0:13 3 5 5
0:22 7 5 5
0:34 7 5 5
0:35 7 6 5
0:37 5 6 5
And with these results I'd be able to calculate the average for each time by simply summing the 3 relevant columns and dividing by the count (in this case 3). For NULL I'd use the value 0.
Can anybody suggest how to create a user-defined function for replacing NULL values with the latest value?
Update 2
Thanks Martin.
This query worked, but it took almost 21 minutes to go through 13,576 rows, which is far too much.
The final query I used was:
SELECT Time,
(SELECT TOP 1 Stock FROM dbo.Event e WHERE e.Time <= S.Time AND Device = 1 ORDER BY e.Time DESC) AS Device1Value,
(SELECT TOP 1 Stock FROM dbo.Event e WHERE e.Time <= S.Time AND Device = 2 ORDER BY e.Time DESC) AS Device2Value,
(SELECT TOP 1 Stock FROM dbo.Event e WHERE e.Time <= S.Time AND Device = 3 ORDER BY e.Time DESC) AS Device3Value
FROM dbo.Event S GROUP BY Time
but I've extended it to 10 devices.
I agree that this is not the best way to do it. Is there any other way to prepare the data for the average calculation? This one just takes too much processing.
Here's one way. It uses the "Quirky Update" approach to filling in the gaps. This relies on undocumented behaviour, so you may prefer to use a cursor instead.
DECLARE @SourceData TABLE([Time] TIME, Device INT, value FLOAT)

INSERT INTO @SourceData
SELECT '0:00',1,2 UNION ALL
SELECT '0:01',2,3 UNION ALL
SELECT '0:03',3,5 UNION ALL
SELECT '0:03',1,3 UNION ALL
SELECT '0:13',2,5 UNION ALL
SELECT '0:22',1,7 UNION ALL
SELECT '0:34',3,5 UNION ALL
SELECT '0:35',2,6 UNION ALL
SELECT '0:37',1,5
CREATE TABLE #tmpResults
(
[Time] Time primary key,
[1] FLOAT,
[2] FLOAT,
[3] FLOAT
)
INSERT INTO #tmpResults
SELECT [Time],[1],[2],[3]
FROM #SourceData
PIVOT ( MAX(value) FOR Device IN ([1],[2],[3])) AS pvt
ORDER BY [Time];
DECLARE @1 FLOAT, @2 FLOAT, @3 FLOAT

UPDATE #tmpResults
SET @1 = [1] = ISNULL([1],@1),
    @2 = [2] = ISNULL([2],@2),
    @3 = [3] = ISNULL([3],@3)
SELECT [Time],
(SELECT AVG(device)
FROM (SELECT [1] AS device
UNION ALL
SELECT [2]
UNION ALL
SELECT [3]) t) AS [Average value]
FROM #tmpResults
DROP TABLE #tmpResults
So one of the possible solutions I found is far more efficient (less than a second for 14,574 rows). I haven't yet had time to review the results in detail, but at first glance it looks promising. This is the code for the 3-device example:
SELECT Time,
       SUM(CASE MAC WHEN '1' THEN Stock ELSE 0 END) Device1Value,
       SUM(CASE MAC WHEN '2' THEN Stock ELSE 0 END) Device2Value,
       SUM(CASE MAC WHEN '3' THEN Stock ELSE 0 END) Device3Value
FROM dbo.Event
GROUP BY Time
ORDER BY Time
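To get the per-time average from that pivot (a sketch layered on the query above; note that a device which hasn't reported at a given time contributes 0 here rather than its last known value, so this alone doesn't reproduce the carry-forward averages from Update 1):

SELECT Time,
       (Device1Value + Device2Value + Device3Value) / 3.0 AS AverageValue
FROM (
    SELECT Time,
           SUM(CASE MAC WHEN '1' THEN Stock ELSE 0 END) Device1Value,
           SUM(CASE MAC WHEN '2' THEN Stock ELSE 0 END) Device2Value,
           SUM(CASE MAC WHEN '3' THEN Stock ELSE 0 END) Device3Value
    FROM dbo.Event
    GROUP BY Time
) t
ORDER BY Time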
In any case I'll test the code provided by Martin to see if it makes any difference to the results.