Microsoft T-SQL: Can I CONVERT smallint to varchar - sql-server

It may just be me, but... despite the fact that most SQL developers may consider CAST and CONVERT to be very basic stuff (and that may be true), I find Microsoft's documentation page on CAST & CONVERT to be one of the most hideous, unintuitively laid-out, hard-to-understand things I have ever seen, even though much of their other documentation is great. The page constantly blends CAST and CONVERT together, jumping back and forth in each sentence, rather than dealing with them separately. And who puts the target_type as the first argument? Putting the expression first would be more intuitive, and would follow the syntax of nearly every other programming language. Ugh.
MS says that I can only convert to 3 data types (well, actually I'm not really sure whether this applies to both CAST and CONVERT, since they are, in fact, different... but according to the layout of that page it apparently applies equally to both, even though I already know for a fact that it is not true for CAST, which I use much more frequently).
It says: "Is the target data type. This includes xml, bigint, and sql_variant"
Putting aside for the moment the fact that I CAST things to many other data types all the time (date, varchar), my immediate question is: if I can only CONVERT to those data types, then why does this work?
select CONVERT(varchar(200), cast(50 as smallint))
And finally, I'd like to run an INSERT that will be getting a smallint and putting it into a varchar(200) column.
All I'm trying to do is avoid any failures, so maybe I don't really "need" to convert or cast over to varchar, but any comments on
1) what is my apparent misunderstanding about the CONVERT documentation
or
2) how to safely convert it to insert to varchar
are welcome. As long as you're not just overly unpleasant, since there are always those MS fans who get hot under the collar at any critique of MS... :|

Yes, you can convert from smallint to varchar.
1) what is my apparent misunderstanding about the CONVERT documentation
This may be a product of a general lack of understanding of what data types are, how they can be converted from one type to another and, equally important, what styles are when it comes to the visual representation of a data type.
CAST is an explicit cast operation with no style options.
CONVERT is also an explicit cast that gives you the ability to specify a style for the output.
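The practical difference shows up with types like datetime, where the style argument matters; here is a quick illustration (103 and 112 are two of the documented date style codes):
declare @d datetime = '20230615'
select cast(@d as varchar(30))        -- Jun 15 2023 12:00AM: default style, no choice
select convert(varchar(30), @d, 103)  -- 15/06/2023: British/French style
select convert(varchar(30), @d, 112)  -- 20230615: ISO yyyymmdd style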
The documentation clearly states:
Implicit Conversions
Implicit conversions are those conversions that occur without specifying either the CAST or CONVERT function. Explicit conversions are those conversions that require the CAST or CONVERT function to be specified. The following illustration shows all explicit and implicit data type conversions that are allowed for SQL Server system-supplied data types. These include xml, bigint, and sql_variant. There is no implicit conversion on assignment from the sql_variant data type, but there is implicit conversion to sql_variant.
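That chart also answers the INSERT scenario: smallint to varchar is an allowed implicit conversion, so the value converts on assignment even without CAST or CONVERT. A minimal sketch (variable names are just for illustration):
declare @small smallint = 50
declare @text varchar(200)
set @text = @small   -- implicit smallint -> varchar conversion, no CAST/CONVERT needed
select @text         -- '50'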
For your second question
2) how to safely convert it to insert to varchar
It depends on what you mean by safe. Converting to varchar is the conversion most likely to succeed. But whenever you cast to any other data type you are intrinsically changing the very nature of the data, and you will lose precision when casting to smaller types (or applying styles).
The documentation clearly states:
Truncating and Rounding Results
When you convert character or binary expressions (char, nchar, nvarchar, varchar, binary, or varbinary) to an expression of a different data type, data can be truncated, only partially displayed, or an error is returned because the result is too short to display. Conversions to char, varchar, nchar, nvarchar, binary, and varbinary are truncated, except for the conversions shown in the following table.
In other words, casting is never completely safe.
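A few quick illustrations of the behaviors that table describes (the exact error message can vary by SQL Server version):
select cast('truncate me' as varchar(8))  -- 'truncate': strings are silently truncated
select convert(varchar(3), 12345)         -- '*': an integer that does not fit becomes an asterisk
select convert(varchar(3), 12345.67)      -- fails: arithmetic overflow converting numeric to varchar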

Numbers always get silently truncated for me. I would propose:
Option 1
Compare the converted value with the original value.
DECLARE @ORIGINAL DECIMAL(13,2) = -99999999999.99  -- the widest (longest) value the type can hold
DECLARE @EXPECTED VARCHAR(15) = ''
SELECT @EXPECTED = CONVERT(VARCHAR(15), @ORIGINAL)
-- round-trip the string back to decimal; any loss shows up as a mismatch
IF CONVERT(DECIMAL(13,2), @EXPECTED) != @ORIGINAL SELECT 'Ooops'
Option 2
Make sure that all possible values will fit in target varchar.
Decimal(13,2): the widest possible value, "-99999999999.99", needs varchar(15):
- 13 chars for the digits
- 1 char for the decimal separator
- 1 char for the minus sign
Smallint stores 2 bytes, from "-32768" to "32767", needs varchar(6):
- 5 chars for digits
- 1 char for minus sign
Not sure if you need chars for thousands separators, or if you can change it via settings.
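Given those widths, the original poster's smallint can never overflow a varchar(200) column, so the insert is safe with or without an explicit conversion. A sketch (table and column names are hypothetical):
declare @v smallint = -32768            -- worst case: the widest smallint value
insert into dbo.MyTable (MyVarcharCol)  -- MyVarcharCol is the varchar(200) column
values (convert(varchar(6), @v))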

Related

SQL Server cast() vs convert() [duplicate]

This question already has answers here: T-SQL Cast versus Convert (7 answers). Closed 3 years ago.
I am very confused about the exact difference between the cast() function and the convert() function, other than the syntax of course, so that I can efficiently decide when to use which.
From this link:
CAST is an ANSI standard while CONVERT is a specific function in the SQL server. There are also differences when it comes to what a particular function can and cannot do.
For example, the CONVERT function can be used for formatting purposes, especially for date/time and money data types. Meanwhile, CAST is used to remove or reduce formatting while still converting. Also, CONVERT can simulate SET DATEFORMAT options, while CAST cannot.
CAST is also the more portable function of the two, meaning it can be used in many databases. CAST is also less powerful and less flexible than CONVERT. On the other hand, CONVERT allows more flexibility and is the preferred function for date/time values, traditional numbers, and money signifiers. CONVERT is also useful for controlling the data's display format.
CAST can also be used to convert decimal and numeric values to integers, truncating the decimal portion of the value.
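For instance, the truncation that last sentence describes: converting decimal to int drops the fractional part instead of rounding.
select cast(12.9 as int)    -- 12: the fractional part is dropped
select cast(-12.9 as int)   -- -12: truncation is toward zero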

Why does DECIMAL behave like FLOAT?

Even though DECIMAL is an exact numeric type (unlike FLOAT, which is approximate), it behaves rather strangely in the following example:
DECLARE @DECIMAL_VALUE1 DECIMAL(20,9) = 504.70 / 0.151562
DECLARE @DECIMAL_VALUE2 DECIMAL(20,0) = 504.70 / 0.151562
DECLARE @INTEGER_VALUE INT = 504.70 / 0.151562
SELECT
@DECIMAL_VALUE1 AS DECIMAL_VALUE1, -- 3329.990366978
@DECIMAL_VALUE2 AS DECIMAL_VALUE2, -- 3330
@INTEGER_VALUE AS INTEGER_VALUE -- 3329
A value other than 3329 causes a bug in our application. Making the variable type an INTEGER solved our issue, but I cannot get my head around as to why it was caused in the first place.
You asked, "why it was caused in the first place":
To know why, you need to understand the nature of each data type and how it operates within SQL Server.
Integer math truncates decimals (no rounding; for positive values this matches the FLOOR function), which is why you get 3329.
Decimal with 0 places rounds, which is why you get 3330 (3329.99... rounds up).
Decimal with precision/scale rounds to its declared scale, which is why you get 3329.990366978.
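To see all three behaviors side by side on the same quotient:
select cast(3329.990366978 as int)            -- 3329: conversion to int truncates
select cast(3329.990366978 as decimal(20,0))  -- 3330: conversion to decimal rounds
select cast(3329.990366978 as decimal(20,9))  -- 3329.990366978: scale is preserved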
So this isn't unexpected behavior; it's expected given the data types involved. It just may have been unanticipated. The nuances of each data type can be problematic until one runs into them.
I'll choose to ignore the float comment as it is not germane to the question.

Cast rowversion to bigint

In my C# program I don't want to work with a byte array, so I cast the rowversion data type to bigint:
SELECT CAST([version] AS BIGINT) FROM [dbo].[mytable]
So I receive a number instead of a byte array. Is this conversion always successful, and are there any possible problems with it? If so, to which data type should I cast rowversion instead?
rowversion and bigint both take 8 bytes so casting seems possible. However, the difference is that bigint is a signed integer, while rowversion is not.
This is the max value of rowversion that will still cast properly, to the max positive bigint number (9223372036854775807):
select cast(0x7FFFFFFFFFFFFFFF as bigint)
But starting from here, you'll be getting negative numbers:
select cast(0x8000000000000000 as bigint)
I didn't check if the latter cast throws an error in C#.
You probably won't burn through more than 9223372036854775807 rowversion values in your database, but it's still something you should know about, and I personally wouldn't recommend doing this unless you are certain this problem will never occur in your solution.
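If that edge case matters, one alternative (a sketch; the watermark value is just an example, table and column names taken from the question) is to keep the comparison on the SQL side, where rowversion/binary(8) values compare bytewise, i.e. effectively unsigned:
declare @watermark binary(8) = 0x00000000000007D1  -- example value
select [version]
from [dbo].[mytable]
where [version] > @watermark   -- bytewise comparison, no sign issues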
You can convert in C# as well, but if you want to compare the values you should be aware that rowversion is apparently stored big-endian, so you need to do something like:
// using System; using System.Net;
byte[] timestampByteArray = ... // from a DataReader / LINQ to SQL, etc.
var timestampInt = BitConverter.ToInt64(timestampByteArray, 0);
// rowversion bytes are big-endian; swap to host (little-endian) order
timestampInt = IPAddress.NetworkToHostOrder(timestampInt);
It'd probably be more correct to convert it with BitConverter.ToUInt64, but then you'd have to write your own endian conversion, as there's no overload of NetworkToHostOrder that takes a UInt64. Or just borrow one from Jon Skeet (search the page for 'endian').

SQL Server : SUM() weird behaviour

I run this example in SQL Server Management Studio:
SELECT CONVERT(REAL, -2101.12) n INTO #t
SELECT * FROM #t
SELECT SUM(n) FROM #t
The first SELECT creates a temp table #t with 1 column n of type real, and it puts 1 row in it with the value -2101.12.
The second SELECT confirms that the table is created with the intended content and the result is:
n
---------
-2101.12
The third SELECT sums the only number that is there, but the result is:
-2101.1201171875
So the question is: where does the 0.0001171875 come from?
EDIT: I know about the lack of precision of the real and float data types; unfortunately, I cannot change the database schema because of this. What surprises me, though, is that I would expect to see the extra decimals in the second SELECT as well, since the value is supposed to be stored with that lack of precision. Since that doesn't happen in the second SELECT, why does the SUM function pick it up?
You've just discovered that real (a.k.a. floating point) data is approximate.
Use the decimal data type instead.
As for why only SUM shows the extra digits: SUM over a real column is computed and returned as float, and float is displayed with more significant digits than real, so the stored approximation becomes visible.
The FLOAT and REAL data types are known as approximate data types. The behavior of FLOAT and REAL follows the IEEE 754 specification on approximate numeric data types.
Approximate numeric data types do not store the exact values specified for many numbers; They store an extremely close approximation of the value. For many applications, the tiny difference between the specified value and the stored approximation is not noticeable. At times, though, the difference becomes noticeable. Because of the approximate nature of the FLOAT and REAL data types, do not use these data types when exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks. Instead, use the integer, decimal, money, or smallmoney data types.
Avoid using FLOAT or REAL columns in WHERE clause search conditions, especially with the = or <> operators. It is best to limit FLOAT and REAL columns with > or < comparisons.
Source of above statement
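You can see the stored approximation directly, without SUM, by widening the real value to float, which is effectively what the SUM computation does:
select cast(convert(real, -2101.12) as float)   -- -2101.1201171875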

CAST as decimal, or *1.0?

Both of the following queries will give me the same result, and I've seen both techniques being used in the past to ensure a decimal data type is returned.
select CAST(5 as decimal(18,2))/2
select 5*1.0/2
Which method is best to use?
Why would people use *1.0 over casting? Is this just because it's quicker to type?
If you want to control the precision or scale of the result, then you'll have to use CAST or CONVERT. But I believe that if you just want "a decimal", then as you suggested, it's just quicker to type * 1.0.
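If you want to inspect what type each expression actually yields, SQL_VARIANT_PROPERTY reports the inferred base type, precision, and scale (the exact numbers follow the documented precision/scale rules for decimal arithmetic):
select sql_variant_property(cast(5 as decimal(18,2)) / 2, 'BaseType')  as base_type,
       sql_variant_property(cast(5 as decimal(18,2)) / 2, 'Precision') as prec,
       sql_variant_property(cast(5 as decimal(18,2)) / 2, 'Scale')     as scale
select sql_variant_property(5 * 1.0 / 2, 'BaseType')  as base_type,
       sql_variant_property(5 * 1.0 / 2, 'Precision') as prec,
       sql_variant_property(5 * 1.0 / 2, 'Scale')     as scale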
