First of all, I am working on SQL Server 2012 Express 64-bit.
I am trying to convert a varchar field to decimal and multiply it by another varchar field.
This is the code I am trying to run in a SELECT query:
(CONVERT(decimal(12,2), COALESCE(Lineas.PARAM1, 0.00))) * (CONVERT(decimal(12,2), COALESCE(Lineas.PARAM2, 0.00))) AS 'MULTIPARAM'
Where PARAM1 and PARAM2 are varchar.
The values in these fields are 1 and 2, so this should be simple. I want it to return 2.00, but the error shown is:
Msg 8114, Level 16, State 5, Line 1
Error converting data type varchar to numeric.
I just want to get the result of PARAM1 * PARAM2, even if PARAM1 or PARAM2 is NULL (converted to 0) or contains a decimal value (dot-separated, for example 100.5).
I don't get why it isn't working... Thanks!
FOUND THE ERROR:
COALESCE(Lineas.PARAM1, 0.00) makes it fail with "Error converting data type varchar to numeric", and COALESCE(Lineas.PARAM1, 0) makes it fail with "converting varchar '100.5' to int" (which is not my intention anyway).
How can I make this work?
SOLUTION BY JATIN:
SELECT (COALESCE(CONVERT(decimal(12,2), NULLIF(REPLACE(Lineas.PARAM1,' ',''),'')), 0.00)) * (COALESCE(CONVERT(decimal(12,2), NULLIF(REPLACE(Lineas.PARAM2,' ',''),'')), 0.00)) AS 'MULTIPARAM'
What I did was first REPLACE all spaces; then, if the result is an empty string, NULLIF turns it into NULL so COALESCE can supply 0.00. The rest is the same.
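For reference, TRY_CONVERT is available from SQL Server 2012 onward and sidesteps the problem entirely: it returns NULL instead of raising an error when a value cannot be converted. A sketch using the table and column names from the question:

```sql
-- TRY_CONVERT yields NULL for unconvertible values (empty strings,
-- stray text, internal spaces), so COALESCE can fall back to 0.00.
SELECT
    COALESCE(TRY_CONVERT(decimal(12,2), Lineas.PARAM1), 0.00)
  * COALESCE(TRY_CONVERT(decimal(12,2), Lineas.PARAM2), 0.00) AS MULTIPARAM
FROM Lineas;
```

Note that this silently treats any non-numeric value as 0.00, which matches the intent stated in the question but may hide bad data.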
Try this...
SELECT (COALESCE(CONVERT(decimal(12,2), Lineas.PARAM1), 0.00)) * (COALESCE(CONVERT(decimal(12,2), Lineas.PARAM2), 0.00)) AS 'MULTIPARAM'
The issue is here:
SELECT COALESCE('1', 0.00)
Because of data type precedence, the varchar side is converted to numeric, not the other way around. If you still get the error "Error converting data type varchar to numeric", then your data contains characters other than 0-9 (and the decimal point).
You're going to kick yourself.
Error message in English;
Msg 8115, Level 16, State 8, Line 12
Arithmetic overflow error converting varchar to data type numeric.
Sample Data;
IF OBJECT_ID('tempdb..#TestData') IS NOT NULL DROP TABLE #TestData
GO
CREATE TABLE #TestData (ID int, Field1 varchar(10), Field2 varchar(10))
INSERT INTO #TestData
VALUES
(1,1,1)
,(2,1,2)
,(3,1,3)
,(4,1,4)
,(5,2,1)
,(6,2,2)
,(7,2,3)
Query (remove the decimal places from your coalesce);
SELECT
ID
,Field1
,Field2
,(CONVERT(decimal(12,2), COALESCE(a.Field1, 0))) * (CONVERT(decimal(12,2), COALESCE(a.Field2, 0))) AS 'MULTIPARAM'
FROM #TestData a
Result;
ID Field1 Field2 MULTIPARAM
1 1 1 1.0000
2 1 2 2.0000
3 1 3 3.0000
4 1 4 4.0000
5 2 1 2.0000
6 2 2 4.0000
7 2 3 6.0000
If you want the result to two decimals then use this;
SELECT
ID
,Field1
,Field2
,CONVERT(decimal(12,2),(CONVERT(decimal(12,2), COALESCE(a.Field1, 0))) * (CONVERT(decimal(12,2), COALESCE(a.Field2, 0)))) AS 'MULTIPARAM'
FROM #TestData a
Which gives these results;
ID Field1 Field2 MULTIPARAM
1 1 1 1.00
2 1 2 2.00
3 1 3 3.00
4 1 4 4.00
5 2 1 2.00
6 2 2 4.00
7 2 3 6.00
If you have decimals in the varchar field, then give this a go;
CONVERT(decimal(12,2),COALESCE(CAST(a.Field1 AS decimal(12,2)), 0) * COALESCE(CAST(a.Field2 AS decimal(12,2)), 0)) AS 'MULTIPARAM'
Related
I have a bunch of production orders and I'm trying to group them within a datetime range, then count the quantity within that range. For example, I want each group to run from 22:30 to 22:30 each day.
PT.ActualFinish is datetime (e.g. if PT.ActualFinish is 2020-05-25 23:52:30 then it would be counted on 26 May instead of 25 May).
Currently it's grouped by date (midnight to midnight) as opposed to the desired 22:30 to 22:30.
GROUP BY CAST(PT.ActualFinish AS DATE)
I've been trying to combine DATEADD with the GROUP BY without success. Is it possible?
Just add 1.5 hours (90 minutes) and then extract the date:
group by convert(date, dateadd(minute, 90, pt.actualfinish))
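To see why the 90-minute shift works: 22:30 is exactly 90 minutes before midnight, so anything finishing at 22:30 or later rolls over to the next calendar date. A small self-contained check (table and sample times invented for illustration):

```sql
-- Hypothetical finish times around the 22:30 cutoff.
DECLARE @t TABLE (ActualFinish datetime);
INSERT @t VALUES
    ('2020-05-25 22:29:59'),  -- before cutoff: stays on 25 May
    ('2020-05-25 23:52:30'),  -- after cutoff:  rolls to 26 May
    ('2020-05-26 10:00:00');  -- daytime:       stays on 26 May

SELECT BucketDate = CONVERT(date, DATEADD(MINUTE, 90, ActualFinish)),
       Total      = COUNT(*)
FROM @t
GROUP BY CONVERT(date, DATEADD(MINUTE, 90, ActualFinish));
```

This should return one row for 2020-05-25 (count 1) and one for 2020-05-26 (count 2).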
For this kind of thing you can use a function I created called NGroupRangeAB (code below) which can be used to create groups over values with an upper and lower bound.
Note that this:
SELECT f.*
FROM core.NGroupRangeAB(0,1440,12) AS f
ORDER BY f.RN;
Returns:
RN GroupNumber Low High
--- ------------ ------ -------
0 1 0 120
1 2 121 240
2 3 241 360
3 4 361 480
4 5 481 600
5 6 601 720
6 7 721 840
7 8 841 960
8 9 961 1080
9 10 1081 1200
10 11 1201 1320
11 12 1321 1440
This:
SELECT
f.GroupNumber,
L = DATEADD(MINUTE,f.[Low]-SIGN(f.[Low]),CAST('00:00:00.0000000' AS TIME)),
H = DATEADD(MINUTE,f.[High]-1,CAST('00:00:00.0000000' AS TIME))
FROM core.NGroupRangeAB(0,1440,12) AS f
ORDER BY f.RN;
Returns:
GroupNumber L H
------------- ---------------- ----------------
1 00:00:00.0000000 01:59:00.0000000
2 02:00:00.0000000 03:59:00.0000000
3 04:00:00.0000000 05:59:00.0000000
4 06:00:00.0000000 07:59:00.0000000
5 08:00:00.0000000 09:59:00.0000000
6 10:00:00.0000000 11:59:00.0000000
7 12:00:00.0000000 13:59:00.0000000
8 14:00:00.0000000 15:59:00.0000000
9 16:00:00.0000000 17:59:00.0000000
10 18:00:00.0000000 19:59:00.0000000
11 20:00:00.0000000 21:59:00.0000000
12 22:00:00.0000000 23:59:00.0000000
Now for a real-life example that may help you:
-- Sample Data
DECLARE @table TABLE (tm TIME);
INSERT @table VALUES ('00:15'),('11:20'),('21:44'),('09:50'),('02:15'),('02:25'),
('02:31'),('23:31'),('23:54');
-- Solution:
SELECT
GroupNbr = f.GroupNumber,
TimeLow = f2.L,
TimeHigh = f2.H,
Total = COUNT(t.tm)
FROM core.NGroupRangeAB(0,1440,12) AS f
CROSS APPLY (VALUES(
DATEADD(MINUTE,f.[Low]-SIGN(f.[Low]),CAST('00:00:00.0000000' AS TIME)),
DATEADD(MINUTE,f.[High]-1,CAST('00:00:00.0000000' AS TIME)))) AS f2(L,H)
LEFT JOIN @table AS t
ON t.tm BETWEEN f2.L AND f2.H
GROUP BY f.GroupNumber, f2.L, f2.H;
Returns:
GroupNbr TimeLow TimeHigh Total
-------------------- ---------------- ---------------- -----------
1 00:00:00.0000000 01:59:00.0000000 1
2 02:00:00.0000000 03:59:00.0000000 3
3 04:00:00.0000000 05:59:00.0000000 0
4 06:00:00.0000000 07:59:00.0000000 0
5 08:00:00.0000000 09:59:00.0000000 1
6 10:00:00.0000000 11:59:00.0000000 1
7 12:00:00.0000000 13:59:00.0000000 0
8 14:00:00.0000000 15:59:00.0000000 0
9 16:00:00.0000000 17:59:00.0000000 0
10 18:00:00.0000000 19:59:00.0000000 0
11 20:00:00.0000000 21:59:00.0000000 1
12 22:00:00.0000000 23:59:00.0000000 2
Note that an inner join will eliminate the 0-count rows.
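For instance, switching the join type keeps only the buckets that actually contain rows (a self-contained sketch using the same core.NGroupRangeAB function, with hypothetical sample times):

```sql
DECLARE @times TABLE (tm TIME);
INSERT @times VALUES ('00:15'),('02:25'),('23:54');

SELECT
  GroupNbr = f.GroupNumber,
  Total    = COUNT(t.tm)
FROM core.NGroupRangeAB(0,1440,12) AS f
CROSS APPLY (VALUES(
  DATEADD(MINUTE,f.[Low]-SIGN(f.[Low]),CAST('00:00:00.0000000' AS TIME)),
  DATEADD(MINUTE,f.[High]-1,CAST('00:00:00.0000000' AS TIME)))) AS f2(L,H)
INNER JOIN @times AS t          -- INNER instead of LEFT: empty buckets disappear
  ON t.tm BETWEEN f2.L AND f2.H
GROUP BY f.GroupNumber, f2.L, f2.H;
```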
CREATE FUNCTION core.NGroupRangeAB
(
@min BIGINT, -- Group Number Lower boundary
@max BIGINT, -- Group Number Upper boundary
@groups BIGINT -- Number of groups required
)
/*****************************************************************************************
[Purpose]:
Creates an auxiliary table that allows for grouping over a range of values (@min to @max)
divided into a requested number of groups (@groups). core.NGroupRangeAB can be thought of as a
set-based, T-SQL version of Oracle's WIDTH_BUCKET, which:
"...lets you construct equiwidth histograms, in which the histogram range is divided into
intervals that have identical size. (Compare with NTILE, which creates equiheight
histograms.)" https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions214.htm
See usage examples for more details.
[Author]:
Alan Burstein
[Compatibility]:
SQL Server 2008+
[Syntax]:
--===== Autonomous
SELECT ng.*
FROM core.NGroupRangeAB(@min,@max,@groups) AS ng;
[Parameters]:
@min = BIGINT; lower boundary of the range to be divided into groups
@max = BIGINT; upper boundary of the range to be divided into groups
@groups = BIGINT; requested number of groups (similar to the parameter passed to NTILE)
[Returns]:
Inline Table Valued Function returns:
GroupNumber = BIGINT; the group number, beginning with 1 and ending with @groups
Members = BIGINT; Number of possible distinct members in the group
Low = BIGINT; the lower-bound range
High = BIGINT; the Upper-bound range
[Dependencies]:
core.rangeAB (iTVF)
[Developer Notes]:
1. An inline derived tally table using a CTE or subquery WILL NOT WORK. NTally requires
a correctly indexed tally table named dbo.tally; if you have or choose to use a
permanent tally table with a different name or in a different schema make sure to
change the DDL for this function accordingly. The recommended number of rows is
1,000,000; below is the recommended DDL for dbo.tally. Note the "Beginning" and "End"
of tally code. To learn more about tally tables see:
http://www.sqlservercentral.com/articles/T-SQL/62867/
2. For best results a P.O.C. index should exist on the table that you are "tiling". For
more information about P.O.C. indexes see:
http://sqlmag.com/sql-server-2012/sql-server-2012-how-write-t-sql-window-functions-part-3
3. NGroupRangeAB is deterministic; for more about deterministic and nondeterministic functions
see https://msdn.microsoft.com/en-us/library/ms178091.aspx
[Examples]:
-----------------------------------------------------------------------------------------
--===== 1. Basic illustration of the relationship between core.NGroupRangeAB and NTILE.
-- Consider this query which assigns 3 "tile groups" to 7 rows:
DECLARE @rows BIGINT = 7, @tiles BIGINT = 3;
SELECT t.N, t.TileGroup
FROM ( SELECT r.RN, NTILE(@tiles) OVER (ORDER BY r.RN)
FROM core.rangeAB(1,@rows,1,1) AS r) AS t(N,TileGroup);
Results:
N TileGroup
--- ----------
1 1
2 1
3 1
4 2
5 2
6 3
7 3
To pivot these "equiheight histograms" into "equiwidth histograms" we could do this:
DECLARE @rows BIGINT = 7, @tiles BIGINT = 3;
SELECT TileGroup = t.TileGroup,
[Low] = MIN(t.N),
[High] = MAX(t.N),
Members = COUNT(*)
FROM ( SELECT r.RN, NTILE(@tiles) OVER (ORDER BY r.RN)
FROM core.rangeAB(1,@rows,1,1) AS r) AS t(N,TileGroup)
GROUP BY t.TileGroup;
Results:
TileGroup Low High Members
---------- ---- ----- -----------
1 1 3 3
2 4 5 2
3 6 7 2
This will return the same thing at a tiny fraction of the cost:
SELECT TileGroup = ng.GroupNumber,
[Low] = ng.[Low],
[High] = ng.[High],
Members = ng.Members
FROM core.NGroupRangeAB(1,@rows,@tiles) AS ng;
--===== 2.1. Divide 25 rows into 4 groups
DECLARE @min BIGINT = 1, @max BIGINT = 25, @groups BIGINT = 4;
SELECT ng.GroupNumber, ng.Members, ng.low, ng.high
FROM core.NGroupRangeAB(@min,@max,@groups) AS ng;
--===== 2.2. Assign group membership to another table
DECLARE @min BIGINT = 1, @max BIGINT = 25, @groups BIGINT = 4;
SELECT
ng.GroupNumber, ng.low, ng.high, s.WidgetId, s.Price
FROM (VALUES('a',$12),('b',$22),('c',$9),('d',$2)) AS s(WidgetId,Price)
JOIN core.NGroupRangeAB(@min,@max,@groups) AS ng
ON s.Price BETWEEN ng.[Low] AND ng.[High]
ORDER BY ng.RN;
Results:
GroupNumber low high WidgetId Price
------------ ---- ----- --------- ---------------------
1 1 7 d 2.00
2 8 13 a 12.00
2 8 13 c 9.00
4 20 25 b 22.00
-----------------------------------------------------------------------------------------
[Revision History]:
Rev 00 - 20190128 - Initial Creation; Final Tuning - Alan Burstein
****************************************************************************************/
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT
RN = r.RN, -- Sort Key
GroupNumber = r.N2, -- Bucket (group) number
Members = g.S-ur.N+1, -- Count of members in this group
[Low] = r.RN*g.S+rc.N+ur.N, -- Lower boundary for the group (inclusive)
[High] = r.N2*g.S+rc.N -- Upper boundary for the group (inclusive)
FROM core.rangeAB(0,@groups-1,1,0) AS r -- Range Function
CROSS APPLY (VALUES((@max-@min)/@groups,(@max-@min)%@groups)) AS g(S,U) -- Size, Underflow
CROSS APPLY (VALUES(SIGN(SIGN(r.RN-g.U)-1)+1)) AS ur(N) -- get Underflow
CROSS APPLY (VALUES(@min+r.RN-(ur.N*(r.RN-g.U)))) AS rc(N); -- Running Count
GO
I have 2 tables in MS SQL Server.
Table 1
ID Name
-------------
10 Series
11 Movie
12 Music
13 Other
Table 2
ID IDCatg Value
---------------------------
1 10 Hannibal
2 10 Blacklist
3 10 POI
4 11 Hitman
5 11 SAW
6 11 Spider man
7 12 taylor swift
8 12 britney spears
I want to select by IDCatg in Table 2 and create a new column in Table 1 like this:
IDCatg Name Value
--------------------------------------------
10 Series Hannibal-Blacklist-POI
11 Movie Hitman-SAW-Spider man
12 Music taylor swift-britney spears
How can I do this with a view?
You can do it using STUFF:
SELECT T21.IDCatg, T1.Name,
[Value] = STUFF((
SELECT '-' + T22.[Value]
FROM Table2 T22
WHERE T21.IDCatg = T22.IDCatg
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
FROM Table2 T21 JOIN
Table1 T1 ON T1.ID=T21.IDCatg
GROUP BY T21.IDCatg,T1.Name
Result:
IDCatg Name Value
---------------------------------------------
10 Series Hannibal-Blacklist-POI
11 Movie Hitman-SAW-Spider man
12 Music taylor swift-britney spears
Sample result in SQL Fiddle
EDIT:
When the type of Value is int, you can cast it to varchar:
[Value] = STUFF((
SELECT '-' + CAST(T22.[Value] AS Varchar(MAX))
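As a side note, on SQL Server 2017 and later the same concatenation can be written more directly with STRING_AGG (not available on older versions; table and column names taken from the question):

```sql
-- STRING_AGG replaces the STUFF + FOR XML PATH idiom on SQL Server 2017+.
SELECT T1.ID AS IDCatg,
       T1.Name,
       STRING_AGG(T2.[Value], '-') AS [Value]
FROM Table1 T1
JOIN Table2 T2
  ON T2.IDCatg = T1.ID
GROUP BY T1.ID, T1.Name;
```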
I have a table like this:
create table Stuff
(StuffID int identity not null,
StuffPrice decimal (8,2) not null,
StuffSold decimal (8,2) not null,
StuffPriceTime datetime not null)
I'd like to do a query that shows, for the recordset I'm returning, the number of times that StuffPrice was greater than StuffSold. Is there any SQL batch way of doing this? Something like:
Select
StuffID,
StuffPrice,
StuffSold,
StuffPriceTime,
SomeFunction(StuffPrice,StuffSold)
From Stuff
Where I'd see a resultset that looks something like:
[StuffID] - [StuffPrice] - [StuffSold] - [StuffPriceTime] - [True/False Result]
Now that I'm writing this out, I suppose I could do a UDF scalar function but I've heard the performance can be terrible for those.
In general, any difference between the columns that can be expressed as a logical condition (predicate) can be turned into a flag, CASE WHEN predicate THEN 1 ELSE 0 END, which can then be summed into the final result.
For example:
create table Stuff
(StuffID int identity not null,
StuffPrice decimal (8,2) not null,
StuffSold decimal (8,2) not null,
StuffPriceTime datetime not null)
insert into Stuff (StuffPrice, StuffSold, StuffPriceTime) values
(10.0, 11.0, getdate()), --> lower
(12.0, 11.0, getdate()), --> greater
(17.0, 18.0, getdate()), --> lower
(17.0, 16.0, getdate()); --> greater
Select
StuffID,
StuffPrice,
StuffSold,
StuffPriceTime,
sum(case when StuffPrice > StuffSold then 1 else 0 end) over() [number of times]
From Stuff
Result:
StuffID StuffPrice StuffSold StuffPriceTime [number of times]
-----------------------------------------------------------------
1 10.00 11.00 2015-01-01 2
2 12.00 11.00 2015-01-01 2
3 17.00 18.00 2015-01-01 2
4 17.00 16.00 2015-01-01 2
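If you also want the per-row True/False column sketched in the question, the same CASE expression can be selected directly alongside the windowed count (using the sample table above):

```sql
SELECT
    StuffID,
    StuffPrice,
    StuffSold,
    StuffPriceTime,
    -- per-row flag: 1 when StuffPrice > StuffSold, else 0
    CASE WHEN StuffPrice > StuffSold THEN 1 ELSE 0 END AS PriceExceedsSold,
    -- total count of such rows across the whole result set
    SUM(CASE WHEN StuffPrice > StuffSold THEN 1 ELSE 0 END) OVER() AS [number of times]
FROM Stuff;
```

No UDF is needed, so none of the scalar-function performance concerns apply.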
I'm having trouble with my database because it's got auditing worked in and things are hard. I need to compare the current change with the last change, so I thought I'd include a grouping column. But when I run my code, some of the column values are *. It goes 1,2,3,4,5,6,7,8,9,*. What the... I don't even know?
This is what I'm doing:
DECLARE @Person char(11), @DonationYTD decimal(10, 2), @OldPerson char(11), @OldDonationYTD decimal(10, 2), @Group int;
SET @Group=1;
DECLARE TempCursor CURSOR FOR Select PersonID, DonationYTD FROM MyTable;
OPEN TempCursor;
FETCH NEXT FROM TempCursor INTO @Person, @DonationYTD ;
WHILE @@FETCH_STATUS=0
BEGIN
IF( @Person != @OldPerson)
SET @Group=1;
IF( @Person = @OldPerson AND @DonationYTD!=@OldDonationYTD)
SET @Group=@Group+1;
UPDATE MyTable SET CHANGEGROUP=@Group WHERE PersonID=@Person AND DonationYTD=@DonationYTD;
SET @OldPerson = @Person;
SET @OldDonationYTD = @DonationYTD;
FETCH NEXT FROM TempCursor INTO @Person, @DonationYTD ;
END
CLOSE TempCursor;
DEALLOCATE TempCursor;
SELECT PersonID, DonationYTD, Changegroup FROM MyTable
1 15.00 1
1 15.00 1
1 20.00 1
2 3.00 1
2 4.00 2
2 15.00 3
2 8.00 4
2 4.00 5
2 15.00 6
2 3.00 7
2 3.00 7
2 9.00 8
2 9.00 8
2 10.00 9
2 14.00 *
2 14.00 *
If I try to do anything with Changegroup, it tells me it can't convert the varchar value '*' to data type int.
Why am I getting an asterisk? How can I fix it?
You are encountering the issue described here
When integers are implicitly converted to a character data type, if
the integer is too large to fit into the character field, SQL Server
enters ASCII character 42, the asterisk (*).
From which I deduce your column must be [var]char(1). This odd behaviour does not occur for the newer n[var]char types, as discussed here.
The fix would be to change the column datatype. Ideally to a numeric datatype such as int but if it must be string at least one long enough to hold the intended contents.
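The behaviour is easy to reproduce in isolation: any integer too wide for the target char/varchar column becomes an asterisk (a minimal sketch):

```sql
-- 9 fits into one character; 10 does not, so SQL Server emits '*'.
SELECT CAST(9  AS varchar(1)) AS Fits,
       CAST(10 AS varchar(1)) AS TooBig;
```

This is why the question's Changegroup column counts up to 9 and then flips to * at 10.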
The following is how I have created the table:
CREATE TABLE #tmp
(
[Counter] int
,Per date not null
,Cam float
,CamMeg float
,Hfx float
,HfxMet float
,TorMetric float
)
The following is how I call the table later on in my script:
SELECT
((ROW_NUMBER() over(order by Per desc)-1)/@Avg)+1 as [Counter], Per,
Cam,
AVG(CamMetric) as CamMet,
HfxMe,
FROM #tmp
GROUP BY [counter] ;
DROP TABLE #tmp
The following are the errors that I get:
Msg 8120, Level 16, State 1, Line 175
Column '#tmp.Per' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
Msg 8120, Level 16, State 1, Line 175
Column '#tmp.Per' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
What am I doing incorrectly?
The data looks like the following
counter --- per ---Cam --- HfxMet ---......
1 2012-02-09 3 16
1 2012-02-24 4 12
1 2012-03-04 2 15
2 2012-03-15 1 18
2 2012-03-30 6 20
2 2012-04-07 10 6
3 2012-04-28 8 3
Now I want to add two more columns, CamMetricAvg and HfxMetricAvg, that look at all rows sharing the same counter value (e.g. all rows where counter is 1), average their CamMetric and HfxMetric values respectively, and put that average on each of those rows, like the following:
counter --- per ---Cam --- CamMt ---
1 2012-02-09 3 3
1 2012-02-24 4 3
1 2012-03-04 2 3
2 2012-03-15 1 5.6
2 2012-03-30 6 5.6
2 2012-04-07 10 5.6
3 2012-04-28 8 8
SELECT [Counter], Period,
CamMetric,
AvgCamMetric = AVG(CamMetric) OVER(PARTITION BY Counter),
HfxMetric,
AvgHfxMetric = AVG(HfxMetric) OVER(PARTITION BY Counter)
... repeat for other metrics ...
FROM #tmpTransHrsData
GROUP BY [Counter], Period, CamMetric, HfxMetric;
SQLFiddle example