Append records of two columns - sql-server

How can I append records of two fields into a single row?
Let's say we have two columns in a table containing n records. I need to concatenate each row, delimited, into a single row.
Col1   Col2
----   ----
Abs    10
Abd    15
Abf    20
Abg    0
Desired output:
O/pcol
Abs:10 ;Abd:15 ;Abf:20 ;Abg:0

You can use an "accumulator" variable to concatenate all the values:
declare @testTable table (Col1 nvarchar(50), Col2 nvarchar(50))
declare @accumulator nvarchar(max)
insert into @testTable
select 'Abs',10
union all select 'Abd',15
union all select 'Abf',20
union all select 'Abg',0
set @accumulator = ''
select @accumulator = @accumulator + Col1 + ':' + Col2 + ' ;' from @testTable
select @accumulator
The output of this snippet should be:
Abs:10 ;Abd:15 ;Abf:20 ;Abg:0 ;
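If you are on SQL Server 2017 or later, STRING_AGG gives the same result without relying on the quirky (and officially undocumented) behavior of variable concatenation, and without a trailing separator to trim. A minimal sketch against the same @testTable as above:
-- Requires SQL Server 2017+; the separator is placed only between values
select string_agg(Col1 + ':' + Col2, ' ;') as [O/pcol]
from @testTable
This returns Abs:10 ;Abd:15 ;Abf:20 ;Abg:0 directly.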

Related

Calculate the row value based on different formula

I want to calculate the cell values based on the given formula in each row.
The input records and expected output were provided as screenshots (not reproduced here).
First thing: your formula is not correct. As I understand it, there is a unique row for each sl, and the correct formula for the sum of col1 and col2 is (col1 + col2). So correct that first.
After that, you can implement this using dynamic SQL:
create table tab_sum ( id int, col1 int, col2 int, col varchar(max) )
insert into tab_sum ( id, col1, col2, col )
values ( 1, 3, 5, '(col1 + col2)' )
, ( 2, 4, 6, '(col1 + col2)' )
DECLARE @query NVARCHAR(MAX) = '';
WITH CTE_DistinctFormulas AS
(
SELECT DISTINCT col as Formula FROM tab_sum
)
SELECT @query = @query + '
UNION ALL
SELECT id, col1, col2, col, ' + Formula + ' AS CalcValue FROM tab_sum WHERE col = ''' + Formula + ''' '
FROM CTE_DistinctFormulas;
SET @query = STUFF(@query,1,12,'');
PRINT @query;
EXEC (@query);
Or, if editing the formula is not an option for you, you may try this. But make sure you have only one row for each id and its corresponding values; otherwise it will sum all of the similar rows into one.
create table tab_sum ( id int, col1 int, col2 int, col varchar(max) )
insert into tab_sum ( id, col1, col2, col )
values ( 1, 3, 5, 'sum(col1 + col2)' )
, ( 2, 4, 6, 'sum(col1 + col2)' )
DECLARE @query NVARCHAR(MAX) = '';
WITH CTE_DistinctFormulas AS
(
SELECT DISTINCT col as Formula FROM tab_sum
)
SELECT @query = @query + '
UNION ALL
SELECT id, col1, col2, col, ' + Formula + ' AS CalcValue FROM tab_sum WHERE col = ''' + Formula + ''' group by id, col1, col2, col '
FROM CTE_DistinctFormulas;
SET @query = STUFF(@query,1,12,'');
PRINT @query;
EXEC (@query);
I think you want something like this:
select *,case when row_cal like '%1%' and row_cal like '%2%' then value1 + value2
when row_cal like '%1%' and row_cal like '%3%' then value1 + value3
end as result from YourTable
Did you have a look at the CASE/WHEN expression? See the MS doc: CASE
So your query would be something like:
SELECT sl, audience_1, audience_2, audience_3,
CASE WHEN audience_3 > 0 THEN audience_1 + audience_3
ELSE audience_1 + audience_2
END audience_total
FROM audience
;
You did not provide the criteria, so I had to guess :-)
(I did not run the query since I had no server handy :-) )
UPDATE after comment by OP:
If you have 71 different audience columns in the same table, you have a different issue imho.
Then my solution would be:
- split out the audience numbers to a detail table with sl, audience_type and audience_count
- create an audience_calc_config table with two columns, calc_method and audience_type
- for each different calculation and audience type, add an entry to the above table
- remove the formula from the original table and replace it with the appropriate calc_method
Then run a simple select with a JOIN, GROUP BY sl and SUM(audience_count).
(Of course it would be even nicer to split the audience_calc_config into a master/detail pair, so you can have a foreign key...)

How to transform first row as column name?

I hope you are all well.
I would like your help on a data transformation task that I have.
I would like to convert the first row of a table into column names.
I am working on SQL Server on Azure and I get daily data from another service.
This service loads a table of the same form, and I would like to transform the data in the same manner.
Do you have any idea how to do it?
The way to solve this is by using a little dynamic SQL magic.
First, create and populate the sample table (please save us this step in your future questions):
DECLARE @T As Table
(
Row_num int,
Line nvarchar(4000)
);
INSERT INTO @T (Row_Num, Line) VALUES
(1, 'Col1;Col2;Col3'),
(2, 'Val1;Val2;Val3'),
(3, 'Value1;Value2;Value1'),
(4, 'Val A; val B;Val A'),
(5, 'Value A; Value B;Value C');
Then, build a union all query that selects the values from every row but the first, replacing the semicolon (;) separator with a comma (,) surrounded by apostrophes ('). Add an apostrophe before and after the string (which means we are treating all the data as strings):
DECLARE @Sql nvarchar(max) = '';
SELECT @Sql += 'UNION ALL SELECT '''+ REPLACE(Line, ';', ''',''') + ''' '
FROM @T
WHERE Row_Num > 1;
Next, use STUFF to replace the first UNION ALL with a common table expression declaration, specifying the column names in the declaration itself. Note that here we don't need the apostrophes anymore, just to replace the semicolon with a comma:
SELECT @Sql = STUFF(@Sql, 1, 10, 'WITH CTE('+ REPLACE(Line, ';', ',') +') AS (') + ') SELECT * FROM CTE'
FROM @T
WHERE Row_Num = 1;
Finally, execute the SQL:
EXEC(@Sql)
Results:
Col1 Col2 Col3
Val1 Val2 Val3
Value1 Value2 Value1
Val A val B Val A
Value A Value B Value C
You can see a live demo on rextester.
Another possible approach is to transform your text data into valid JSON arrays and then use OPENJSON() with an explicit schema and dynamic statement.
Working example:
Input:
CREATE TABLE #Data (
RowNum int,
Line nvarchar(max)
)
INSERT INTO #Data
(RowNum, Line)
VALUES
(1, 'ColumnA;ColumnB;ColumnC'),
(2, 'ValueA1;ValueB1;ValueC1'),
(3, 'ValueA2;ValueB2;ValueC2'),
(4, 'ValueA3;ValueB3;ValueC3'),
(5, 'ValueA4;ValueB4;ValueC4'),
(6, 'ValueA5;ValueB5;ValueC5')
T-SQL:
-- Explicit schema generation
DECLARE @schema nvarchar(max)
SELECT @schema = STUFF((
SELECT CONCAT(N',', j.[value], N' nvarchar(max) ''$[', j.[key], N']''')
FROM #Data d
CROSS APPLY OPENJSON(CONCAT(N'["', REPLACE(d.Line, ';', '","'), N'"]')) j
WHERE d.RowNum = 1
FOR XML PATH('')
), 1, 1, N'')
-- Dynamic statement
DECLARE @stm nvarchar(max)
SET @stm = CONCAT(
N'SELECT j.* FROM #Data d ',
N'CROSS APPLY OPENJSON(CONCAT(N''[["'', REPLACE(d.Line, '';'', ''","''), N''"]]'')) ',
N'WITH (',
@schema,
N') j WHERE d.RowNum > 1'
)
-- Execution
EXEC sp_executesql @stm
Output:
-----------------------
ColumnA ColumnB ColumnC
-----------------------
ValueA1 ValueB1 ValueC1
ValueA2 ValueB2 ValueC2
ValueA3 ValueB3 ValueC3
ValueA4 ValueB4 ValueC4
ValueA5 ValueB5 ValueC5
Explanations:
The main part is to transform each row's data into valid JSON arrays. The count of the columns can be different.
Data from the first row will be used for explicit schema generation and values ColumnA;ColumnB;ColumnC are transformed into ["ColumnA","ColumnB","ColumnC"]. Values from subsequent rows ValueA1;ValueB1;ValueC1 are transformed into [["ValueA1","ValueB1","ValueC1"]].
The next simple examples demonstrate how OPENJSON() returns data with a default and an explicit schema:
With default schema:
DECLARE @json nvarchar(max)
SET @json = '["ValueA1", "ValueB1", "ValueC1"]'
SELECT *
FROM OPENJSON(@json)
Output for default schema:
----------------
key value type
----------------
0 ValueA1 1
1 ValueB1 1
2 ValueC1 1
With explicit schema:
SET @json = '[["ValueA1", "ValueB1", "ValueC1"]]'
SELECT *
FROM OPENJSON(#json)
WITH (
ColumnA nvarchar(max) '$[0]',
ColumnB nvarchar(max) '$[1]',
ColumnC nvarchar(max) '$[2]'
)
Output for explicit schema:
-----------------------
ColumnA ColumnB ColumnC
-----------------------
ValueA1 ValueB1 ValueC1

Multiple tables in the where clause SQL

I have 2 tables:
Table_1
GetID  UnitID
1      1,2,3
2      4,5
3      5,6
4      6
Table_2
ID  UnitID  UserID
1   1       1
1   2       1
1   3       1
1   4       1
1   5       2
1   6       3
I want the 'GetID' based on the 'UserID'.
Let me explain with an example.
I want all the GetIDs where UserID is 1.
The result set should be 1 and 2. 2 is included because one of the units of GetID 2 has UserID 1.
I want all the GetIDs where UserID is 2.
The result set should be 2 and 3. 2 is included because one of the units of GetID 2 has UserID 2.
I want to achieve this.
Thank you in advance.
You can try a query like this:
See live demo
select
distinct userid,getid
from Table_1 t1
join Table_2 t2
on t1.unitId+',' like '%' +cast(t2.unitid as varchar(max))+',%'
and t2.userid=1
The query for this will be relatively ugly, because you made the mistake of storing CSV data in the UnitID column (or maybe someone else did and you are stuck with it).
SELECT DISTINCT
t1.GetID
FROM Table_1 t1
INNER JOIN Table_2 t2
ON ',' + t1.UnitID + ',' LIKE '%,' + CONVERT(varchar(10), t2.UnitID) + ',%'
WHERE
t2.UserID = 1;
Demo
To understand the join trick being used here, for the first row of Table_1 we are comparing ,1,2,3, against other single UnitID values from Table_2, e.g. %,1,%. Hopefully it is clear that my logic would match a single UnitID value in the CSV string in any position, including the first and last.
But a much better long term approach would be to separate those CSV values across separate records. Then, in addition to requiring a much simpler query, you could take advantage of things like indices.
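For illustration, a minimal sketch of that normalized design (Table_1_Units is a hypothetical child table replacing the CSV column):
-- one row per (GetID, UnitID) pair instead of a CSV string
create table Table_1_Units ( GetID int not null, UnitID int not null, primary key (GetID, UnitID) )
-- the query then becomes a plain, index-friendly join
select distinct u.GetID
from Table_1_Units u
inner join Table_2 t2 on t2.UnitID = u.UnitID
where t2.UserID = 1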
try this:
declare @Table_1 table(GetID INT, UnitId VARCHAR(10))
declare @Table_2 table(ID INT, UnitId INT, UserId INT)
INSERT INTO @Table_1
SELECT 1,'1,2,3'
union
SELECT 2,'4,5'
union
SELECT 3,'5,6'
union
SELECT 4,'6'
INSERT INTO @Table_2
SELECT 1,1,1
union
SELECT 1,2,1
union
SELECT 1,3,1
union
SELECT 1,4,1
union
SELECT 1,5,2
union
SELECT 1,6,3
declare @UserId INT = 2
DECLARE @UnitId VARCHAR(10)
SELECT @UnitId = COALESCE(@UnitId + ',', '') + CAST(UnitId AS VARCHAR(5)) from @Table_2 WHERE UserId = @UserId
select distinct t.GetId
from @Table_1 t
CROSS APPLY [dbo].[Split](UnitId,',') AS AA
CROSS APPLY [dbo].[Split](@UnitId,',') AS BB
WHERE AA.Value = BB.Value
Split Function:
CREATE FUNCTION [dbo].Split(@input AS Varchar(4000))
RETURNS
@Result TABLE(Value BIGINT)
AS
BEGIN
DECLARE @str VARCHAR(20)
DECLARE @ind Int
IF(@input is not null)
BEGIN
SET @ind = CharIndex(',',@input)
WHILE @ind > 0
BEGIN
SET @str = SUBSTRING(@input,1,@ind-1)
SET @input = SUBSTRING(@input,@ind+1,LEN(@input)-@ind)
INSERT INTO @Result values (@str)
SET @ind = CharIndex(',',@input)
END
SET @str = @input
INSERT INTO @Result values (@str)
END
RETURN
END
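On SQL Server 2016 and later you can skip the hand-rolled Split function and use the built-in STRING_SPLIT instead. A sketch reusing @Table_1 and @UnitId from the snippet above (note that STRING_SPLIT returns its output in a column named value):
select distinct t.GetId
from @Table_1 t
CROSS APPLY STRING_SPLIT(t.UnitId, ',') AS AA
CROSS APPLY STRING_SPLIT(@UnitId, ',') AS BB
WHERE AA.value = BB.value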

Splitting a string in SQL and storing it in a temp table

I have been facing a problem for 3 days now. I am working on SSRS, where a stored procedure expects multiple rows of parameters, with rows delimited by ';' and columns by ','. So I want to split the columns on ',' and the rows on ';' and then insert them into a temp table. Any help will be really appreciated.
E.g., data coming into the stored procedure as a parameter from the app:
101,1,1,1,5;
102,1,1,1,4;
103,1,1,1,3;
It is accepted as varchar in the stored procedure.
Now I want to split the data on ';' first and then on ',' for every row.
If I understood the question correctly, the data is required in rows and columns based on the string: split the columns on ',' and the rows on ';'.
DECLARE @PARAM_STRING VARCHAR(100)='101,1,1,1,5; 102,1,1,1,4; 103,1,1,1,3;11,11,11,11,11;12,12,12,12,12;'
DECLARE @DYNAMIC_QUERY VARCHAR(MAX)
DECLARE @TABLE TABLE(ID INT,DATA VARCHAR(MAX))
INSERT INTO @TABLE
SELECT 1 ID, 'SELECT '+DATA FROM (
SELECT A.B.value('.','VARCHAR(50)')DATA FROM
(SELECT CAST('<A>'+REPLACE(@PARAM_STRING,';','</A><A>')+'</A>' AS XML)COL)T
CROSS APPLY T.COL.nodes('/A') AS A(B))F WHERE DATA<>''
SELECT @DYNAMIC_QUERY=STUFF((SELECT ' UNION ' + CAST(DATA AS VARCHAR(MAX)) [text()]FROM @TABLE WHERE ID = t.ID
FOR XML PATH(''), TYPE).value('.','NVARCHAR(MAX)'),1,7,' ')
FROM @TABLE t GROUP BY ID
EXECUTE(@DYNAMIC_QUERY)
Result :
(No column name) (No column name) (No column name) (No column name) (No column name)
11 11 11 11 11
12 12 12 12 12
101 1 1 1 5
102 1 1 1 4
103 1 1 1 3
There are many approaches to do this, using XML, a CTE or a loop. Below is one using XML:
DECLARE @ParamStr VARCHAR(500) = '101,1,1,1,5; 102,1,1,1,4; 103,1,1,1,3;'
DECLARE @x XML
SELECT @x = CAST('<R><SemiCol><Comma>'+ REPLACE(REPLACE(@ParamStr,';','</Comma></SemiCol><SemiCol><Comma>'),',','</Comma><Comma>')+ '</Comma></SemiCol></R>' AS XML)
SELECT t.value('.', 'int') AS inVal
FROM @x.nodes('R/SemiCol/Comma') AS x(t)
WHERE LEN(t.value('.', 'varchar(10)')) > 0
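On SQL Server 2022 and Azure SQL Database, STRING_SPLIT accepts a third argument, enable_ordinal, so the same two-level split needs no XML at all. A sketch, assuming five comma-separated values per row as in the example:
DECLARE @ParamStr VARCHAR(500) = '101,1,1,1,5; 102,1,1,1,4; 103,1,1,1,3;'
-- outer split on ';' gives one row per record; inner split on ','
-- gives the column values, with ordinal preserving their position
SELECT
MAX(CASE c.ordinal WHEN 1 THEN c.value END) AS Col1,
MAX(CASE c.ordinal WHEN 2 THEN c.value END) AS Col2,
MAX(CASE c.ordinal WHEN 3 THEN c.value END) AS Col3,
MAX(CASE c.ordinal WHEN 4 THEN c.value END) AS Col4,
MAX(CASE c.ordinal WHEN 5 THEN c.value END) AS Col5
FROM STRING_SPLIT(@ParamStr, ';', 1) AS r
CROSS APPLY STRING_SPLIT(LTRIM(r.value), ',', 1) AS c
WHERE r.value <> ''
GROUP BY r.ordinal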

Find non-ASCII characters in varchar columns using SQL Server

How can rows with non-ASCII characters be returned using SQL Server?
If you can show how to do it for one column would be great.
I am doing something like this now, but it is not working
select *
from Staging.APARMRE1 as ar
where ar.Line like '%[^!-~ ]%'
For extra credit, if it can span all varchar columns in a table, that would be outstanding! In this solution, it would be nice to return three columns:
The identity field for that record. (This will allow the whole record to be reviewed with another query.)
The column name
The text with the invalid character
Id | FieldName | InvalidText |
----+-----------+-------------------+
25 | LastName | Solís |
56 | FirstName | François |
100 | Address1 | 123 Ümlaut street |
Invalid characters would be any outside the range of SPACE (decimal 32) through ~ (decimal 127).
Here is a solution for the single column search using PATINDEX.
It also displays the StartPosition, InvalidCharacter and ASCII code.
select line,
patindex('%[^ !-~]%' COLLATE Latin1_General_BIN,Line) as [Position],
substring(line,patindex('%[^ !-~]%' COLLATE Latin1_General_BIN,Line),1) as [InvalidCharacter],
ascii(substring(line,patindex('%[^ !-~]%' COLLATE Latin1_General_BIN,Line),1)) as [ASCIICode]
from staging.APARMRE1
where patindex('%[^ !-~]%' COLLATE Latin1_General_BIN,Line) >0
I've been running this bit of code with success
declare @UnicodeData table (
data nvarchar(500)
)
insert into
@UnicodeData
values
(N'Horse�')
,(N'Dog')
,(N'Cat')
select
data
from
@UnicodeData
where
data collate LATIN1_GENERAL_BIN != cast(data as varchar(max))
Which works well for known columns.
For extra credit, I wrote this quick script to search all nvarchar columns in a given table for Unicode characters.
declare
@sql varchar(max) = ''
,@table sysname = 'mytable' -- enter your table here
;with ColumnData as (
select
RowId = row_number() over (order by c.COLUMN_NAME)
,c.COLUMN_NAME
,ColumnName = '[' + c.COLUMN_NAME + ']'
,TableName = '[' + c.TABLE_SCHEMA + '].[' + c.TABLE_NAME + ']'
from
INFORMATION_SCHEMA.COLUMNS c
where
c.DATA_TYPE = 'nvarchar'
and c.TABLE_NAME = @table
)
select
@sql = @sql + 'select FieldName = ''' + c.ColumnName + ''', InvalidCharacter = [' + c.COLUMN_NAME + '] from ' + c.TableName + ' where ' + c.ColumnName + ' collate LATIN1_GENERAL_BIN != cast(' + c.ColumnName + ' as varchar(max)) ' + case when c.RowId <> (select max(RowId) from ColumnData) then ' union all ' else '' end + char(13)
from
ColumnData c
-- check
-- print @sql
exec (@sql)
I'm not a fan of dynamic SQL but it does have its uses for exploratory queries like this.
try something like this:
DECLARE @YourTable table (PK int, col1 varchar(20), col2 varchar(20), col3 varchar(20));
INSERT @YourTable VALUES (1, 'ok','ok','ok');
INSERT @YourTable VALUES (2, 'BA'+char(182)+'D','ok','ok');
INSERT @YourTable VALUES (3, 'ok',char(182)+'BAD','ok');
INSERT @YourTable VALUES (4, 'ok','ok','B'+char(182)+'AD');
INSERT @YourTable VALUES (5, char(182)+'BAD','ok',char(182)+'BAD');
INSERT @YourTable VALUES (6, 'BAD'+char(182),'B'+char(182)+'AD','BAD'+char(182)+char(182)+char(182));
--if you have a Numbers table use that, otherwise make one using a CTE
WITH AllNumbers AS
( SELECT 1 AS Number
UNION ALL
SELECT Number+1
FROM AllNumbers
WHERE Number<1000
)
SELECT
pk, 'Col1' BadValueColumn, CONVERT(varchar(20),col1) AS BadValue --make the XYZ in convert(varchar(XYZ), ...) the largest length of col1, col2, col3
FROM @YourTable y
INNER JOIN AllNumbers n ON n.Number <= LEN(y.col1)
WHERE ASCII(SUBSTRING(y.col1, n.Number, 1))<32 OR ASCII(SUBSTRING(y.col1, n.Number, 1))>127
UNION
SELECT
pk, 'Col2' BadValueColumn, CONVERT(varchar(20),col2) AS BadValue
FROM @YourTable y
INNER JOIN AllNumbers n ON n.Number <= LEN(y.col2)
WHERE ASCII(SUBSTRING(y.col2, n.Number, 1))<32 OR ASCII(SUBSTRING(y.col2, n.Number, 1))>127
UNION
SELECT
pk, 'Col3' BadValueColumn, CONVERT(varchar(20),col3) AS BadValue
FROM @YourTable y
INNER JOIN AllNumbers n ON n.Number <= LEN(y.col3)
WHERE ASCII(SUBSTRING(y.col3, n.Number, 1))<32 OR ASCII(SUBSTRING(y.col3, n.Number, 1))>127
order by 1
OPTION (MAXRECURSION 1000);
OUTPUT:
pk BadValueColumn BadValue
----------- -------------- --------------------
2 Col1 BA¶D
3 Col2 ¶BAD
4 Col3 B¶AD
5 Col1 ¶BAD
5 Col3 ¶BAD
6 Col1 BAD¶
6 Col2 B¶AD
6 Col3 BAD¶¶¶
(8 row(s) affected)
This script searches for non-ASCII characters in one column. It generates a string of all valid characters, here code points 32 to 127. Then it searches for rows that don't match the list:
declare @str varchar(256); -- 96 escaped characters need 192 bytes
declare @i int;
set @str = '';
set @i = 32;
while @i <= 127
begin
set @str = @str + '|' + char(@i);
set @i = @i + 1;
end;
select col1
from YourTable
where col1 like '%[^' + @str + ']%' escape '|';
Running the various solutions on some real-world data - 12M rows, varchar length ~30, around 9k dodgy rows, no full-text index in play - the PATINDEX solution was the fastest, and it also selected the most rows.
(I pre-ran km.'s solution to set the cache to a known state, ran the 3 processes, and finally ran km.'s again - the last 2 runs of km.'s gave times within 2 seconds.)
patindex solution by Gerhard Weiss -- Runtime 0:38, returns 9144 rows
select dodgyColumn from myTable fcc
WHERE patindex('%[^ !-~]%' COLLATE Latin1_General_BIN,dodgyColumn ) >0
the substring-numbers solution by MT. -- Runtime 1:16, returned 8996 rows
select dodgyColumn from myTable fcc
INNER JOIN dbo.Numbers32k dn ON dn.number<(len(fcc.dodgyColumn ))
WHERE ASCII(SUBSTRING(fcc.dodgyColumn , dn.Number, 1))<32
OR ASCII(SUBSTRING(fcc.dodgyColumn , dn.Number, 1))>127
udf solution by Deon Robertson -- Runtime 3:47, returns 7316 rows
select dodgyColumn
from myTable
where dbo.udf_test_ContainsNonASCIIChars(dodgyColumn , 1) = 1
There is a user-defined function available on the web called 'Parse Alphanumeric'. Google 'UDF parse alphanumeric' and you should find the code for it. This user-defined function removes all characters that don't fit between 0-9, a-z, and A-Z.
Select * from Staging.APARMRE1 ar
where dbo.udf_parsealpha(ar.last_name) <> ar.last_name
That should bring back any records that have a last_name with invalid chars for you... though your bonus-points question is a bit more of a challenge. I think a CASE statement could handle it. This is a bit of pseudocode; I'm not entirely sure it'd work.
Select id, case when dbo.udf_parsealpha(ar.last_name) <> ar.last_name then 'last name'
when dbo.udf_parsealpha(ar.first_name) <> ar.first_name then 'first name'
when dbo.udf_parsealpha(ar.Address1) <> ar.Address1 then 'Address1'
end,
case when dbo.udf_parsealpha(ar.last_name) <> ar.last_name then ar.last_name
when dbo.udf_parsealpha(ar.first_name) <> ar.first_name then ar.first_name
when dbo.udf_parsealpha(ar.Address1) <> ar.Address1 then ar.Address1
end
from Staging.APARMRE1 ar
where dbo.udf_parsealpha(ar.last_name) <> ar.last_name or
dbo.udf_parsealpha(ar.first_name) <> ar.first_name or
dbo.udf_parsealpha(ar.Address1) <> ar.Address1
I wrote this in the forum post box, so I'm not quite sure it'll function as-is, but it should be close. I'm also not sure how it will behave if a single record has two fields with invalid chars.
As an alternative, you should be able to change the FROM clause away from a single table and into a subquery that looks something like:
select id, fieldname, value from (
Select id, 'last_name' as 'fieldname', last_name as 'value'
from Staging.APARMRE1 ar
Union
Select id, 'first_name' as 'fieldname', first_name as 'value'
from Staging.APARMRE1 ar
---(and repeat unions for each field)
) data
where dbo.udf_parsealpha(value) <> value
The benefit here is that for every column you only need to extend the UNION, while in the CASE-statement version you have to repeat the comparison three times for every column.
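As an aside, if you cannot find the Parse Alphanumeric code online, here is a minimal sketch of what such a function might look like, assuming it simply strips every character outside 0-9, a-z and A-Z (a hypothetical reconstruction; the version you find may differ):
-- hypothetical reconstruction, not the original udf_parsealpha
CREATE FUNCTION dbo.udf_parsealpha (@input varchar(8000))
RETURNS varchar(8000)
AS
BEGIN
-- repeatedly remove the first character that is not 0-9, a-z or A-Z
DECLARE @pos int = PATINDEX('%[^0-9a-zA-Z]%', @input COLLATE Latin1_General_BIN)
WHILE @pos > 0
BEGIN
SET @input = STUFF(@input, @pos, 1, '')
SET @pos = PATINDEX('%[^0-9a-zA-Z]%', @input COLLATE Latin1_General_BIN)
END
RETURN @input
END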
To find which field has invalid characters:
SELECT * FROM Staging.APARMRE1 FOR XML AUTO, TYPE
You can test it with this query:
SELECT top 1 'char 31: '+char(31)+' (hex 0x1F)' field
from sysobjects
FOR XML AUTO, TYPE
The result will be:
Msg 6841, Level 16, State 1, Line 3 FOR XML could not serialize the
data for node 'field' because it contains a character (0x001F) which
is not allowed in XML. To retrieve this data using FOR XML, convert it
to binary, varbinary or image data type and use the BINARY BASE64
directive.
This is very useful when you write XML files and get invalid-character errors during validation.
Here is a UDF I built to detect columns with extended ASCII characters. It is quick, and you can extend the character set you want to check. The second parameter allows you to switch between checking anything outside the standard character set and allowing an extended set:
create function [dbo].[udf_ContainsNonASCIIChars]
(
@string nvarchar(4000),
@checkExtendedCharset bit
)
returns bit
as
begin
declare @pos int = 1;
declare @char varchar(1);
declare @return bit = 0;
while @pos <= len(@string)
begin
select @char = substring(@string, @pos, 1)
if ascii(@char) < 32 or ascii(@char) > 126
begin
if @checkExtendedCharset = 1
begin
if ascii(@char) not in (9,124,130,138,142,146,150,154,158,160,170,176,180,181,183,184,185,186,192,193,194,195,196,197,199,200,201,202,203,204,205,206,207,209,210,211,212,213,214,216,217,218,219,220,221,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,248,249,250,251,252,253,254,255)
begin
select @return = 1;
select @pos = (len(@string) + 1)
end
else
begin
select @pos = @pos + 1
end
end
else
begin
select @return = 1;
select @pos = (len(@string) + 1)
end
end
else
begin
select @pos = @pos + 1
end
end
return @return;
end
USAGE:
select Address1
from PropertyFile_English
where dbo.udf_ContainsNonASCIIChars(Address1, 1) = 1
