SQL Server Padding 0's with RIGHT Not Working - sql-server

I am using SQL Server 2008.
I have a varchar(30) column which holds check numbers. If the payment was made using a credit card, checknbr will be a zero-length string. I need to pad this data with leading zeroes such that the entire output is ten digits long. If there is no check number, it should just be ten zeroes.
Here is the data I would like to see:
0000000000
0000000000
0000000114
0000000105
0000000007
Here is the query I have been using and am stuck on:
SELECT RIGHT((CASE
WHEN LEN(checknbr) = 0 THEN '0000000000'
ELSE '0000000000'+checknbr
END),10) as checknbr
FROM payhistory
WHERE number = 12345678
And this is what I get:
0000000000
0000000000
114
105
7
Trying another way:
SELECT RIGHT('0000000000'+checknbr,10)
FROM payhistory
WHERE number = 3861821
And I get the same results. Oddly enough, I have a varchar(10) column where
RIGHT('0000'+EDC.DatabaseNumber,4)
yields exactly the results I am after. What am I missing here? Something about a quirk with the RIGHT function?
Thanks in advance

If you are using SQL Server 2012 or later, try the following:
SELECT RIGHT(CONCAT('0000000000', checknbr), 10)
CONCAT automatically converts NULL values to empty strings, removing the need for extra checks.
By the way, the issue you're having is that your query still sees checknbr as a number instead of a varchar value. Casting it to varchar will solve that for you.
EDIT for SQL Server 2008:
Since you have a VARCHAR column with no NULL values, the only thing that still comes to mind is trailing spaces. Try:
SELECT RIGHT('0000000000' + RTRIM(checknbr), 10);
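If you want to confirm that trailing spaces are the cause, a quick check (a sketch against the table and filter from the question) is to compare DATALENGTH, which counts trailing spaces, with LEN, which ignores them:
SELECT checknbr,
       LEN(checknbr) AS len_without_trailing,
       DATALENGTH(checknbr) AS bytes_total
FROM payhistory
WHERE number = 12345678
  AND DATALENGTH(checknbr) > LEN(checknbr) -- rows with trailing spaces (varchar: one byte per character)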

Using cast() or convert() to convert checknbr to a character value will solve the implicit conversion of '0000000000' to the integer 0.
select right('0000000000'+convert(varchar(10),checknbr),10)
from payhistory
where number = 3861821
If checknbr can be null and you want it to return as '0000000000', then you can wrap it in coalesce() or isnull():
select right('0000000000'+isnull(convert(varchar(10),checknbr),''),10)
from payhistory
where number = 3861821
If checknbr is not a number, but varchar(30), then (as Jens points out), you may have padded spaces that need to be trimmed:
rextester demo: http://rextester.com/IRLV83801
create table payhistory (checknbr varchar(30));
insert into payhistory values ('0'), ('1'), (' 0'), ('0 '),(' ');
select checknbr = right('0000000000'+checknbr,10)
from payhistory
returns:
+------------+
| checknbr |
+------------+
| 0000000000 |
| 0000000001 |
| 0 |
| 0 |
| |
+------------+
And if you trim the padded spaces:
select checknbr = right('0000000000'+ltrim(rtrim(checknbr)),10)
from payhistory
returns:
+------------+
| checknbr |
+------------+
| 0000000000 |
| 0000000001 |
| 0000000000 |
| 0000000000 |
| 0000000000 |
+------------+
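A combined sketch for the original query (payhistory, checknbr and number are the names from the question): trimming handles padded spaces and ISNULL covers any NULLs.
SELECT RIGHT('0000000000' + ISNULL(LTRIM(RTRIM(checknbr)), ''), 10) AS checknbr
FROM payhistory
WHERE number = 12345678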

Related

TRY_CONVERT in CASE WHEN

I have SQL code like this:
SELECT
CASE WHEN TRY_CONVERT(int, strCol) IS NULL
THEN strCol
ELSE CONVERT(VARCHAR, CONVERT(int, strCol))
END
The table is as follows:
| strCol |
|--------|
| 000373 |
| 2AB38  |
| C2039  |
| ABC21  |
| 32BC   |
I wish to drop all the leading 0s in rows that contain a pure number:
| strCol |
|--------|
| 373    |
| 2AB38  |
| C2039  |
| ABC21  |
| 32BC   |
But I got the following error:
Conversion failed when converting the varchar value '2AB38' to data type int.
I don't quite understand; it should not even enter the second CASE branch, should it?
Yet another option is try_convert in concert with coalesce.
Example
Declare @YourTable Table ([strCol] varchar(50))
Insert Into @YourTable Values
('000373')
,('2AB38')
,('C2039')
,('ABC21')
,('32BC')
Select *
,NewVal = coalesce(left(try_convert(int,strCol),10),strCol)
From @YourTable
Returns
strCol NewVal
000373 373
2AB38 2AB38
C2039 C2039
ABC21 ABC21
32BC 32BC
Thank you so much @Dale K and @Programnik.
CASE WHEN ISNUMERIC(strCol) = 0
THEN strCol
ELSE TRY_CONVERT(VARCHAR, TRY_CONVERT(int, strCol))
END AS strCol
This is the piece of code that got the work done.
All branches are evaluated; there is no guarantee it will short-circuit. @Dale K
You can use ISNUMERIC in SQL Server. @Programnik
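For completeness, here is a self-contained sketch of that fix against the sample data from the question (the varchar(50) length is an assumption; expected output shown as a comment):
DECLARE @t TABLE (strCol varchar(50));
INSERT INTO @t VALUES ('000373'), ('2AB38'), ('C2039'), ('ABC21'), ('32BC');

SELECT CASE WHEN ISNUMERIC(strCol) = 0
            THEN strCol
            ELSE TRY_CONVERT(varchar(50), TRY_CONVERT(int, strCol))
       END AS strCol
FROM @t;
-- 373, 2AB38, C2039, ABC21, 32BC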

Splitting up one row/column into one or multiple rows over two columns

First post here! I'm trying to update a stored procedure in my employer's Data Warehouse that links two tables on their ID's. The stored procedure is based on 2 columns in Table A: its primary key, and a column that contains the primary keys from Table B and their domain in one column. Note that it physically only needs Table A since the ID's from B are in there. The old code used some PATINDEX/SUBSTRING code that assumes two things:
The FK's are always 7 characters long
Domain strings look like this "#xx-yyyy" where xx has to be two characters and yyyy four.
The problem however:
We've recently outgrown the 7-digit FK's and are now looking at 7 or 8 digits
Longer domain strings are implemented (where xx may be between 2 and 15 characters)
Sometimes there is no domain string. Just some FK's, delimited the same way.
The code is poorly documented and includes some ID exceptions (not a problem, just annoying)
Some info:
The Data Warehouse follows the Data Vault method and this procedure is stored on SQL Server and is triggered by SSIS. Subsequent to this procedure the HUB and Satellites are updated, so in short: I can't just create a new stored procedure, but will instead try to integrate my code into the old one.
The server is running SQL Server 2012, so I can't use string_split
This platform is dying out so I just have to "keep it running" for this year.
An ID and domain are always separated with one space
If a record has no foreign keys it will always have an empty string
When a record has multiple (foreign) ID's it will always use the same delimiting, even when the individual FK's have no domain string next to it. Delimiter looks like this:
"12345678 #xx-xxxx[CR][CR][CR][LF]12345679 #yy-xxxx"
I've managed to create some code that will assign row numbers and is flexible in recognising the number of FK's.
This is a piece of the old code:
DECLARE
    @MAXCNT INT = (SELECT MAX(ROW) FROM #Worktable),
    @C_ID INT,
    @R_ID INT,
    @SOURCE CHAR(5),
    @STRING VARCHAR(20),
    @VALUE CHAR(20),
    @LEN INT,
    @STARTSTRINGLEN INT = 0,
    @MAXSTRINGLEN INT,
    @CNT INT = 1

WHILE @CNT <= @MAXCNT
BEGIN
    SELECT @LEN = LEN(REQUESTS), @STRING = REQUESTS, @C_ID = C_ID FROM #Worktable WHERE ROW = @CNT
    --1 REQUEST RELATED TO ONE CHANGE
    IF @LEN < 17
    BEGIN
        INSERT INTO #ChangeRequest
        SELECT @C_ID, SUBSTRING(@STRING, 0, CASE WHEN PATINDEX('%-xxxx%', @STRING) = 0 THEN @LEN + 1 ELSE PATINDEX('%-xxxx%', @STRING) - 4 END)
        --SELECT @STRING AS STRING, @LEN AS LENGTH
    END
    ELSE
    -- MULTIPLE REQUESTS RELATED TO ONE CHANGE
    SET @STARTSTRINGLEN = 0
    WHILE @STARTSTRINGLEN < @LEN
    BEGIN
        SET @MAXSTRINGLEN = (SELECT PATINDEX('%-xxxx%', SUBSTRING(@STRING, @STARTSTRINGLEN, @STARTSTRINGLEN + 17))) + 7
        INSERT INTO #ChangeRequest
        --remove CRLF
        SELECT @C_ID,
               REPLACE(REPLACE(
                   SUBSTRING(@STRING, @STARTSTRINGLEN + 1, @MAXSTRINGLEN)
               , CHAR(13), ''), CHAR(10), '')
        SET @STARTSTRINGLEN = @STARTSTRINGLEN + @MAXSTRINGLEN
        IF @MAXSTRINGLEN = 0 BEGIN SET @STARTSTRINGLEN = @LEN END
    END
    SET @CNT = @CNT + 1;
END;
Since this loop assumes fixed lengths, I need to make it more flexible. My code:
(CASE WHEN LEN([Requests]) = 0
THEN 0
ELSE (LEN(REPLACE(REPLACE(Requests,CHAR(10),'|'),CHAR(13),''))-LEN(REPLACE(REPLACE(Requests,CHAR(10),''),CHAR(13),'')))+1
END)
This consistently shows the accurate number of FK's and thus the number of rows to be created. Now I need to create a loop in which to physically create these rows and split the FK and domain into two columns.
Source table:
+---------+----------------------------------------------------------------------------+
| Some ID | Other ID's |
+---------+----------------------------------------------------------------------------+
| 1 | 21 |
| 2 | 31 #xxx-xxx |
| 3 | 41 #xxx-xxx[CR][CR][CR][LF]42 #yyy-xxx[CR][CR][CR][LF]43 #zzz-xxx |
| 4 | 51[CR][CR][CR][LF]52[CR][CR][CR][LF]53 #xxx-xxx[CR][CR][CR][LF]54 #yyy-xxx |
| 5 | <empty string> |
+---------+----------------------------------------------------------------------------+
Target table:
+-----+----------------+----------------+
| SID | OID | Domain |
+-----+----------------+----------------+
| 1 | 21 | <empty string> |
| 2 | 31 | xxx-xxx |
| 3 | 41 | xxx-xxx |
| 3 | 42 | yyy-xxx |
| 3 | 43 | zzz-xxx |
| 4 | 51 | <empty string> |
| 4 | 52 | <empty string> |
| 4 | 53 | xxx-xxx |
| 4 | 54 | yyy-xxx |
| 5 | <empty string> | <empty string> |
+-----+----------------+----------------+
Currently all rows are created but every one beyond the first for each SID is empty.
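For what it's worth, here is a set-based sketch of the split for SQL Server 2012 (no string_split). It converts each delimited string to XML nodes and then splits each entry on the space and '#'. The table and column names (SourceTable, SomeID, OtherIDs) are placeholders for the real ones, and the CAST to XML assumes the data contains no XML-special characters such as '&' or '<':
;WITH split AS (
    SELECT s.SomeID,
           LTRIM(RTRIM(x.n.value('.', 'varchar(100)'))) AS Entry
    FROM SourceTable AS s
    CROSS APPLY (
        SELECT CAST('<r>' +
                    REPLACE(REPLACE(s.OtherIDs, CHAR(13), ''),  -- drop the stray CRs
                            CHAR(10), '</r><r>') +              -- LF becomes the row delimiter
                    '</r>' AS XML) AS doc
    ) AS d
    CROSS APPLY d.doc.nodes('/r') AS x(n)
)
SELECT SomeID AS SID,
       CASE WHEN CHARINDEX(' ', Entry) > 0
            THEN LEFT(Entry, CHARINDEX(' ', Entry) - 1)
            ELSE Entry END AS OID,
       CASE WHEN CHARINDEX('#', Entry) > 0
            THEN SUBSTRING(Entry, CHARINDEX('#', Entry) + 1, 100)
            ELSE '' END AS Domain
FROM split;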

Microsoft SQL Server max of character and numeric values in same column

I am taking the max of a column that contains both numeric and varchar values (e.g. '2008', 'n/a'). What is considered the max? The string or the numeric value?
I am working in Microsoft SQL Server.
Thanks!
The Numeric value is actually a string.
MAX finds the highest value in the collating sequence.
For the collating sequence of ASCII characters, refer to the link below:
https://www.ibm.com/support/knowledgecenter/SSQ2R2_9.5.1/com.ibm.ent.cbl.zos.doc/PGandLR/ref/rlebcasc.html
For character columns, MAX finds the highest value in the collating sequence.
- max() docs
It will be the same value as if you ORDER BY col DESC and take the first row.
Here are some values thrown into a column and sorted descending:
+------------+
| col        |
+------------+
| Z          |
| na         |
| n/a/       |
| 9999999999 |
| 30         |
| 2008       |
| 00000000   |
+------------+
The max() would be the first value from the above. Z.
rextester demo: http://rextester.com/IXXX76837
The exact order will depend on the collation of your column (most default to Latin1_General_CI_AS).
Here is a demo that shows you the sort order of each character for some different collations (Latin general / Latin binary):
rextester demo: http://rextester.com/WLJ38844
create table one
(
Col1 varchar(200)
)
insert into one(Col1)
values('2008'),('n/a'),('aaaa'),('bbb'),('zzzz')
select max(Col1) from one
--zzzz
For your data ('2008', 'n/a'), 'n/a' would be the max.
It is better to eliminate the known 'n/a' values and cast the remaining values to something meaningful when applying the max or min functions:
select max(cast(column as int))
from tablename
where column != 'n/a'
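On SQL Server 2012 or later, TRY_CONVERT is a hedged alternative worth considering: it returns NULL for anything that is not a valid int, and MAX ignores NULLs, so 'n/a' (or any other text) does not need to be filtered out explicitly. The names below are the same placeholders as in the snippet above:
select max(try_convert(int, [column])) as max_numeric_value
from tablename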

TSQL - how to avoid truncating or rounding

For some reason, even though the data in the ORIGINAL_BOOK column has 2 decimal places (e.g. 876.76), the statement below truncates the decimals. I want the decimals to be visible as well. Can someone suggest how to fix this issue, please?
Case
When [DN].[SETTLEMENT_DATE] > [DN].[AS_OF_DATE]
Then Cast([DN].[ORIGINAL_BOOK] as decimal(28, 2))
Else Cast([DN].[CURRENT_BOOK] as decimal(28, 2))
End
I can't be sure, because you only say that the type of the fields involved is NUMERIC without specifying any precision or scale. However, if your source fields really are just the NUMERIC type, SQL Server defaults to NUMERIC(18,0) (as per the MSDN documentation here), so you will only be able to store values with a scale of zero (i.e. no digits after the decimal point), and any values written to these fields with a greater scale (i.e. data after the decimal point) will be rounded accordingly:
CREATE TABLE dn (
ORIGINAL_BOOK NUMERIC,
CURRENT_BOOK NUMERIC
)
INSERT INTO dn
SELECT 876.76, 423.75
UNION
SELECT 0, 0
UNION
SELECT 1.1, 6.5
UNION
SELECT 12, 54
UNION
SELECT 5.789, 6.321
SELECT CAST(dn.ORIGINAL_BOOK AS DECIMAL(28,2)) AS ORIGINAL_BOOK_CONV,
CAST(dn.CURRENT_BOOK AS DECIMAL(28,2)) AS CURRENT_BOOK_CONV
FROM dn
DROP TABLE dn
gives results:
/----------------------------------------\
| ORIGINAL_BOOK_CONV | CURRENT_BOOK_CONV |
|--------------------+-------------------|
|               0.00 |              0.00 |
|               1.00 |              7.00 |
|               6.00 |              6.00 |
|              12.00 |             54.00 |
|             877.00 |            424.00 |
\----------------------------------------/
Increasing the scale of the field in the table will allow values with greater numbers of decimal places to be stored and your CAST call will then reduce the number of decimal places if appropriate:
CREATE TABLE dn (
ORIGINAL_BOOK NUMERIC(28,3),
CURRENT_BOOK NUMERIC(28,3)
)
INSERT INTO dn
SELECT 876.76, 423.75
UNION
SELECT 0, 0
UNION
SELECT 1.1, 6.5
UNION
SELECT 12, 54
UNION
SELECT 5.789, 6.321
SELECT CAST(dn.ORIGINAL_BOOK AS DECIMAL(28,2)) AS ORIGINAL_BOOK_CONV,
CAST(dn.CURRENT_BOOK AS DECIMAL(28,2)) AS CURRENT_BOOK_CONV
FROM dn
DROP TABLE dn
gives results:
/----------------------------------------\
| ORIGINAL_BOOK_CONV | CURRENT_BOOK_CONV |
|--------------------+-------------------|
|               0.00 |              0.00 |
|               1.10 |              6.50 |
|               5.79 |              6.32 |
|              12.00 |             54.00 |
|             876.76 |            423.75 |
\----------------------------------------/
If you are sure that your table fields are capable of containing numeric values to more than zero decimal places (i.e. scale > 0), please post the CREATE TABLE script for the table (you can get this from SSMS) or a screenshot of the Column listing so we can see the true type of the underlying fields. It would also be useful to see values SELECTed from the fields without any CASTing so we can see how the data is presented without any conversion.
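As a quick way to check the declared precision and scale without posting the full script, you could query INFORMATION_SCHEMA.COLUMNS (a sketch; 'dn' stands in for the real table name):
SELECT COLUMN_NAME, DATA_TYPE, NUMERIC_PRECISION, NUMERIC_SCALE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'dn'
  AND COLUMN_NAME IN ('ORIGINAL_BOOK', 'CURRENT_BOOK')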

SQL select all ID's into temp table according to comma delimited string

I have a table structured as such:
| ID | Name |
| 1 | Bob |
| 2 | Jim |
| 3 | Jane |
. .
. .
. .
I am trying to compose a query that will return all ID's of the names that I will be passing. Note that the names will be passed as a comma delimited string.
The query I've tried is:
@Names = 'Bob, Jane'
select ID into #Ids from Users where Name in ((select i.Item from dbo.Split(@Names, ',', 0) as i))
What I was hoping to get was:
| ID |
| 1 |
| 3 |
but instead I just get:
| ID |
| 1 |
Otherwise I would have to loop through this query, but what is the best way to do so? Am I approaching this problem correctly?
The problem might be the spaces in the output of the split function.
After the comma you have a space, so the split returns ' Jane' with a leading space. It should be removed: the = operator (which IN uses) ignores trailing spaces when comparing, but not leading spaces, so ' Jane' does not match 'Jane'. Use the LTRIM and RTRIM functions to remove the spaces before making the comparison:
SELECT ID
INTO #Ids
FROM Users
WHERE NAME IN ((SELECT rtrim(ltrim(i.Item))
                FROM dbo.Split(@Names, ',', 0) AS i))
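If you ever move to SQL Server 2016 or later, the built-in STRING_SPLIT function can replace the custom dbo.Split (a sketch; STRING_SPLIT returns one row per item in a column named value):
DECLARE @Names varchar(max) = 'Bob, Jane'

SELECT ID
INTO #Ids
FROM Users
WHERE Name IN (SELECT LTRIM(RTRIM(s.value))
               FROM STRING_SPLIT(@Names, ',') AS s)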
