Need help removing functions from CASE WHEN - loops

I have a situation where I have created a script to select data in our company's environment. In doing so, I decided to use functions for some pattern matching and stripping of characters in a CASE WHEN.
However, one of our clients doesn't want to let us put their data in our local environment, so I now have the requirement of massaging the script to be able to run on their environment--essentially meaning I need to remove the functions, and I am having trouble thinking about how I need to move stuff around to do so.
An example of the function call would be:
SELECT ....
CASE WHEN Prp = 'Key Cabinet'
AND SerialNumber IS NOT NULL
AND dbo.fnRemoveNonNumericCharacters(SerialNumber) <> ''
THEN dbo.fnRemoveNonNumericCharacters(SerialNumber)
....
INTO #EmpProperty
FROM ....
Where Prp is a column that contains the property type and SerialNumber is a column that contains a serial number, but also some other random garbage because data entry was sloppy.
The function definition is:
WHILE PATINDEX('%[^0-9]%', @strText) > 0
BEGIN
SET @strText = STUFF(@strText, PATINDEX('%[^0-9]%', @strText), 1, '')
END
RETURN @strText
where @strText is the SerialNumber I am passing in.
I may be stuck in analysis paralysis because I just can't figure out a good way to do this. I don't need a full-on solution per se; perhaps just point me in a direction you know will work. Let me know if you would like some sample DDL/DML to mess around with.
Example 'SerialNumber' values: CA100 (Trash bins), T110, 101B.
There are also a bunch of other types of values, such as all text or all numbers, but we are filtering those out. The current pattern matching is good enough.

So I think you mean you can't use a function... so, perhaps:
declare @table table (SomeCol varchar(4000))
insert into @table values
('1 ab2cdefghijk3lmnopqr4stuvwxyz5 6 !7##$8%^&9*()-10_=11+[]{}12\|;:13></14? 15'),
('CA100 (Trash bins), T110, 101B')
;with cte as (
select top (100)
N=row_number() over (order by @@spid) from sys.all_columns),
Final as (
select SomeCol, Col
from @table
cross apply (
select (select X + ''
from (select N, substring(SomeCol, N, 1) X
from cte
where N<=datalength(SomeCol)) [1]
where X between '0' and '9'
order by N
for xml path(''))
) Z (Col)
where Z.Col is not NULL
)
select
SomeCol
,cast(Col as varchar) CleanCol --change this to BIGINT if it isn't too large
from Final
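If the goal is simply to get the UDF call out of the original CASE WHEN, the same digit-stripping can also be folded back into that query with a CROSS APPLY, so the cleaned value is computed once per row and the CASE just references it by name. A rough sketch along those lines, assuming a source table I'm calling dbo.EmpPropertySource to stand in for the FROM clause in the question (adjust names to your schema):
SELECT CASE WHEN Prp = 'Key Cabinet'
            AND SerialNumber IS NOT NULL
            AND Clean.DigitsOnly <> ''
           THEN Clean.DigitsOnly
       END AS CleanSerial
INTO #EmpProperty
FROM dbo.EmpPropertySource
CROSS APPLY (
    SELECT (SELECT SUBSTRING(SerialNumber, N.n, 1)
            FROM (SELECT TOP (LEN(ISNULL(SerialNumber, '')))
                         ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
                  FROM sys.all_columns) AS N            -- ad-hoc tally of character positions
            WHERE SUBSTRING(SerialNumber, N.n, 1) LIKE '[0-9]'   -- keep digits only
            ORDER BY N.n
            FOR XML PATH(''))                           -- re-concatenate the surviving digits
) AS Clean (DigitsOnly);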

Related

Searching for multiple patterns in a string in T-SQL

In T-SQL my dilemma is that I have to parse a potentially long string (up to 500 characters) for any of over 230 possible values and remove them from the string for reporting purposes. These values are a column in another table and they're all upper case and 4 characters long, with the exception of two that are 5 characters long.
Examples of these values are:
USFRI
PROME
AZCH
TXJS
NYDS
XVIV. . . . .
Example of string before:
"Offered to XVIV and USFRI as back ups. No response as of yet."
Example of string after:
"Offered to and as back ups. No response as of yet."
Pretty sure it will have to be a UDF but I'm unable to come up with anything other than stripping ALL the upper case characters out of the string with PATINDEX which is not the objective.
This is unavoidably kludgy, but one way is to split your string into rows; once you have a set of words the rest is easy: simply re-aggregate while ignoring the matching values*:
with t as (
select 'Offered to XVIV and USFRI as back ups. No response as of yet.' s
union select 'Another row AZCH and TXJS words.'
), v as (
select * from (values('USFRI'),('PROME'),('AZCH'),('TXJS'),('NYDS'),('XVIV'))v(v)
)
select t.s OriginalString, s.Removed
from t
cross apply (
select String_Agg(j.[value], ' ') within group(order by Convert(tinyint,j.[key])) Removed
from OpenJson(Concat('["',replace(s, ' ', '","'),'"]')) j
where not exists (select * from v where v.v = j.[value])
)s;
* Requires a fully-supported version of SQL Server.
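If you happen to be on SQL Server 2022 or later, the same split-and-reaggregate idea can be written without the JSON detour, since STRING_SPLIT accepts an enable_ordinal argument there. A sketch of that variant, meant to replace the final SELECT of the statement above (it assumes the t and v CTEs as defined):
select t.s OriginalString, w.Removed
from t
cross apply (
    select String_Agg(ss.value, ' ') within group (order by ss.ordinal) Removed
    from String_Split(t.s, ' ', 1) ss            -- 1 = also return the ordinal column
    where not exists (select * from v where v.v = ss.value)
) w;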
Build a function to do the cleaning of one sentence, then call that function from your query, something like this: SELECT Col1, dbo.fn_ReplaceValue(Col1) AS cleanValue, * FROM MySentencesTable. Your fn_ReplaceValue will be something like the code below. You could also create the table variable outside the function and pass it as a parameter to speed up the process, but this way it is all self-contained.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION fn_ReplaceValue(@sentence VARCHAR(500))
RETURNS VARCHAR(500)
AS
BEGIN
    DECLARE @ResultVar VARCHAR(500)
    DECLARE @allValues TABLE (rowID int, sValues VARCHAR(15))
    DECLARE @id INT = 0
    DECLARE @ReplaceVal VARCHAR(10)
    DECLARE @numberOfValues INT = (SELECT COUNT(*) FROM MyValuesTable)
    --Populate table variable with all values
    INSERT #allValues
    SELECT ROW_NUMBER() OVER(ORDER BY MyValuesCol) AS rowID, MyValuesCol
    FROM MyValuesTable
    SET @ResultVar = @sentence
    --Replace each value in turn
    WHILE (@id < @numberOfValues)
    BEGIN
        SET @id = @id + 1
        SET @ReplaceVal = (SELECT sValues FROM @allValues WHERE rowID = @id)
        SET @ResultVar = REPLACE(@ResultVar, @ReplaceVal, SPACE(0))
    END
    RETURN @ResultVar
END
GO
I suggest creating a table (either temporary or permanent), and loading these 230 string values into this table. Then use it in the following delete:
DELETE
FROM yourTable
WHERE col IN (SELECT col FROM tempTable);
If you just want to view your data sans these values, then use:
SELECT *
FROM yourTable
WHERE col NOT IN (SELECT col FROM tempTable);

How can I specify a specific custom ordering with a SELECT DISTINCT

I've seen all the other answers on this but I haven't seen my specific problem addressed. Basically, I have a table with a bunch of duplicated values, and I need to select exactly one column (to concatenate into a list, ultimately, for output purposes), but with a specific item in that list FIRST. The problem I have is that I can specify the order, or I can specify distinct, but not both. And I CANNOT INCLUDE the order column in the SELECT because that output is going to be used directly, and having two columns in the output breaks everything. This seems like it should be possible, but I can't figure out how.
Here's a contrived example:
DECLARE @List TABLE ([Name] nvarchar(10));
INSERT INTO @List ([Name])
VALUES (N'A'), (N'A'), (N'B'), (N'B'), (N'B'), (N'C'), (N'D'), (N'D'), (N'J'), (N'X'), (N'X'), (N'Y');
-- Has both duplicates, and not in the right order
SELECT * FROM @List;
-- No duplicates, but not in the right order
SELECT DISTINCT * FROM @List;
-- In the right order, but has duplicates
SELECT * FROM @List ORDER BY CASE WHEN [Name] = 'X' THEN '1' WHEN [Name] = 'Y' THEN '2' ELSE [Name] END;
What I want is a simple output like this:
X
Y
A
B
C
D
J
I've tried various approaches, with CTEs, and intermediate steps, but I really just want ONE select, that I can throw into a STUFF, so I get the string "X, Y, A, B, C, D, J", and I can't find any way to preserve the REQUIRED order while doing that. This HAS to be possible, so I'm clearly missing something...
You should use a group by instead of distinct.
SELECT [Name]
FROM @List
group by [Name]
ORDER BY CASE WHEN [Name] = 'X' THEN '1' WHEN [Name] = 'Y' THEN '2' ELSE [Name] END ;
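If the end goal really is the single string "X, Y, A, B, C, D, J", one way to feed the de-duplicated names into an aggregate with that same custom order is shown below (a sketch using the @List table variable from the question; it needs SQL Server 2017+ for STRING_AGG, and on older versions the STUFF ... FOR XML PATH pattern you mention works with the same ORDER BY expression):
SELECT STRING_AGG([Name], ', ')
       WITHIN GROUP (ORDER BY CASE WHEN [Name] = 'X' THEN '1'
                                   WHEN [Name] = 'Y' THEN '2'
                                   ELSE [Name] END) AS NameList
FROM (SELECT DISTINCT [Name] FROM @List) AS d;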
Unfortunately I do not have enough points to post a comment, so I'll try to figure it out in an "answer".
You've mentioned that you should not include the ordering column in the result set, so I assume that column is defined in the table.
If so, then you can use a window function for that purpose. Assume that column is named Ordering_Column; then you may issue this:
select Name
from (
    select Name
         , Ordering_Column
         , row_number() over (partition by Name order by Ordering_Column) as Position
    from SomeTable
) t
where Position = 1
order by Ordering_Column
This would return the first Name value from each group in SomeTable, sorted by Ordering_Column.

Issue with patindex and unicode character '-'

I have a string called Dats which is either of the general appearance xxxx-nnnnn (where x is a character and n is a number) or nnn-nnnnnn.
I want to return only the numbers.
For this I've tried:
SELECT Distinct dats,
Left(SubString(artikelnr, PatIndex('%[0-9.-]%', artikelnr), 8000), PatIndex('%[^0-9.-]%', SubString(artikelnr, PatIndex('%[0-9.-]%', artikelnr), 8000) + 'X')-1)
FROM ThatDatabase
It is almost what I want. It removes the regular characters x, but it does not remove the unicode character -. How can I remove this as well? Also, it seems rather inefficient to have two PatIndex calls for every row; is there a way to avoid this? (This will be used on a big database where the result of this query will be used as keys.)
EDIT: Updated because a new database sometimes contains additional -'s, or . together with -.
DECLARE @T as table
(
dats nvarchar(10)
)
INSERT INTO @T VALUES
('111BWA30'),
('115-200-11'),
('115-22.4-1'),
('10.000.22'),
('600F-FFF200')
I wasn't sure if you wanted the numbers before the - char as well, but if you do, here is one way to do it:
Create and populate sample table (Please save us this step in your future questions)
DECLARE @T as table
(
dats nvarchar(10)
)
INSERT INTO @T VALUES
('abcde-1234'),
('23-343')
The query:
SELECT dats,
case when patindex('%[^0-9]-[0-9]%', dats) > 0 then
right(dats, len(dats) - patindex('%-[0-9]%', dats))
else
stuff(dats, charindex('-', dats), 1, '')
end As NumbersOnly
FROM @T
Results:
dats NumbersOnly
abcde-1234 1234
23-343 23343
If you want only the numbers to the right of the - char, it's simpler:
SELECT dats,
right(dats, len(dats) - patindex('%-[0-9]%', dats)) As RightNumbersOnly
FROM @T
Results:
dats RightNumbersOnly
abcde-1234 1234
23-343 343
If you know which characters you need to remove, then use the REPLACE function:
DECLARE @T as table
(
dats nvarchar(100)
)
INSERT INTO @T
VALUES
('111BWA30'),
('115-200-11'),
('115-22.4-1'),
('10.000.22'),
('600F-FFF200')
SELECT REPLACE(REPLACE(dats, '.', ''), '-', '')
FROM @T
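If you're on SQL Server 2017 or later, the nested REPLACE calls can be collapsed a little with TRANSLATE, which maps a set of single characters in one pass: translate '-' to '.', then strip the one remaining character. A sketch against the same @T sample table:
SELECT REPLACE(TRANSLATE(dats, '-', '.'), '.', '') AS NumbersOnly
FROM @T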

How can I complete this Excel function in SQL Server?

I have approximately 30,000 records where I need to split the Description field and so far I can only seem to achieve this in Excel. An example Description would be:
1USBCP 2RJ45C6 1DVI 1DP 3MD 3MLP HANDS
Below is my Excel function:
=TRIM(MID(SUBSTITUTE($G309," ",REPT(" ",LEN($G309))),((COLUMNS($G309:G309)-1)*LEN($G309))+1,LEN($G309)))
This is then dragged across ten Excel columns, and splits the description field at each space.
I have seen many questions asked about splitting a string in SQL but they only seem to cover one space, not multiple spaces.
There is no easy built-in function in SQL Server to split strings; at least I don't know of one. I usually use a trick that I found somewhere on the Internet some time ago. I modified it for your example.
The trick is that first we figure out how many columns we need. We can do that by counting how many separators the string contains: the length of the string minus the length of the string with the separators removed.
After that, for each string we find the start and end position of each word. At the end we simply cut the string by start and end position and assign the pieces to columns. The details are in the query. Have fun!
CREATE TABLE test(id int, data varchar(100))
INSERT INTO test VALUES (1,'1USBCP 2RJ45C6 1DVI 1DP 3MD 3MLP HANDS')
INSERT INTO test VALUES (2,'Shorter one')
DECLARE @pivot varchar(8000)
DECLARE @select varchar(8000)
SELECT
@pivot=coalesce(@pivot+',','')+'[col'+cast(number+1 as varchar(10))+']'
FROM
master..spt_values where type='p' and
number<=(SELECT max(len(data)-len(replace(data,',',''))) FROM test)
SELECT
@select='
select p.*
from (
select
id,substring(data, start+2, endPos-Start-2) as token,
''col''+cast(row_number() over(partition by id order by start) as varchar(10)) as n
from (
select
id, data, n as start, charindex('','',data,n+2) endPos
from (select number as n from master..spt_values where type=''p'') num
cross join
(
select
id, '' '' + data +'' '' as data
from
test
) m
where n < len(data)-1
and substring(data,n+1,1) = '','') as data
) pvt
Pivot ( max(token)for n in ('+#pivot+'))p'
EXEC(@select)
Here you can find an example in SQL Fiddle.
I didn't notice that you want to get rid of multiple blank spaces.
To do that, please create a small function that prepares your data:
CREATE FUNCTION dbo.[fnRemoveExtraSpaces] (@Number AS varchar(1000))
Returns Varchar(1000)
As
Begin
    Declare @n int -- Length of counter
    Declare @old char(1)
    Set @n = 1
    --Begin Loop of field value
    While @n <= Len(@Number)
    BEGIN
        If Substring(@Number, @n, 1) = ' ' AND @old = ' '
        BEGIN
            Select @Number = Stuff( @Number , @n , 1 , '' )
        END
        Else
        BEGIN
            SET @old = Substring(@Number, @n, 1)
            Set @n = @n + 1
        END
    END
    Return @Number
END
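A quick sanity check of the helper before wiring it into the bigger script (hypothetical input, expected output shown in the comment):
SELECT dbo.fnRemoveExtraSpaces('1USBCP  2RJ45C6   1DVI') AS Cleaned
-- returns '1USBCP 2RJ45C6 1DVI': runs of spaces collapse to a single space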
After that use the new version that removes extra spaces.
DECLARE @pivot varchar(8000)
DECLARE @select varchar(8000)
SELECT
@pivot=coalesce(@pivot+',','')+'[col'+cast(number+1 as varchar(10))+']'
FROM
master..spt_values where type='p' and
number<=(SELECT max(len(dbo.fnRemoveExtraSpaces(data))-len(replace(dbo.fnRemoveExtraSpaces(data),' ',''))) FROM test)
SELECT
@select='
select p.*
from (
select
id,substring(data, start+2, endPos-Start-2) as token,
''col''+cast(row_number() over(partition by id order by start) as varchar(10)) as n
from (
select
id, data, n as start, charindex('' '',data,n+2) endPos
from (select number as n from master..spt_values where type=''p'') num
cross join
(
select
id, '' '' + dbo.fnRemoveExtraSpaces(data) +'' '' as data
from
test
) m
where n < len(data)-1
and substring(data,n+1,1) = '' '') as data
) pvt
Pivot ( max(token)for n in ('+#pivot+'))p'
EXEC(@select)
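For what it's worth, on SQL Server 2022 or later the whole split-to-columns step can be done without master..spt_values or dynamic SQL, because STRING_SPLIT can return an ordinal there. A sketch against the same test table (the WHERE clause drops the empty tokens that repeated spaces produce, and ROW_NUMBER re-numbers the surviving words):
;WITH tokens AS (
    SELECT t.id,
           s.value,
           ROW_NUMBER() OVER (PARTITION BY t.id ORDER BY s.ordinal) AS pos
    FROM test AS t
    CROSS APPLY STRING_SPLIT(t.data, ' ', 1) AS s
    WHERE s.value <> ''
)
SELECT id,
       MAX(CASE pos WHEN 1 THEN value END) AS col1,
       MAX(CASE pos WHEN 2 THEN value END) AS col2,
       MAX(CASE pos WHEN 3 THEN value END) AS col3   -- extend to as many columns as you expect
FROM tokens
GROUP BY id;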
I am probably not understanding your question, but all that you are doing in that formula can be done almost exactly the same way in SQL. I see someone has already answered, but to my mind, how can it be necessary to do all that when you can do this? I might be wrong, but here goes.
declare @test as varchar(100)
set @test='abcd1234567'
select right(@test,2)
, left(@test,2)
, len(@test)
, case when len(@test)%2>0
then left(right(@test,round(len(@test)/2,0)+1),1)
else left(right(@test,round(len(@test)/2,0)+1),2) end
Results
67 ab 11 2
So right, left, length and mid can all be achieved.
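For completeness, the Excel MID piece maps to SUBSTRING; continuing with the @test variable declared above:
select substring(@test, 5, 3)   -- returns '123': 3 characters starting at position 5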
If the spaces are the "substring" dividers, then: I don't remember the actual syntax for a while loop inside SQL selects well, nor have I actually done that per se, but I don't see why it should not be possible. If it doesn't work then you need a temporary table, and if that does not work you need a cursor. The cursor would be an external loop around this one to fetch and process a single string at a time. Or you can do something more clever. I am just a novice.
declare @x varchar(1)
declare @n integer
declare @i integer
declare @str varchar(100) -- this is your description. Fetch it and assign it. If in a cursor just use the column name
set @x = ''
set @n = 0
set @i = 0
while @n < len(@str)
while NOT @x = ' '
begin
set @x = left(right(@str,@n),1)
set @n = @n + 1
end
--insert into or update #temptable blablabla here.
Use @i and @n to locate the substring and then left(right()) it out, or you can SELECT it, but that is a messy procedure if the number of substrings is large. Continue with:
set @i = @n
set @str = right(@str, @i) -- this includes the ' '. left() it out at will.
end
Now, a final comment: there should perhaps be a third loop checking whether you are at the last "substring", because I see now this code will throw an error when it gets to the end. Or "add" an empty space at the end of @str; that will also work. But my time is up. This is a suggestion at least.

Execute multiple dynamic T-SQL statements and obtain a limited number of unique values while preserving order

I have a SourceTable and a table variable @TQueries containing various T-SQL predicates that target SourceTable.
The expected result is to dynamically generate SELECT statements that return a list of Id's as specified by the predicates in @TQueries. Each dynamically generated SELECT statement also needs to execute in a particular order, and the final set of values needs to be unique and the ordering must be preserved.
Fortunately, there's a limit to how many values need to be retrieved and how many dynamic queries need to be generated. The Id list should contain at most 10 Ids, and we don't expect more than 7 queries.
The following is a sample of this setup, not the actual data/database:
-- Set up some test data, this is quick and dirty just to provide some data to test against
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[SourceTable]') AND type in (N'U'))
BEGIN
-- Create a numbers table, sorta
SELECT TOP 20
IDENTITY(INT,1,1) AS Id,
ABS(CHECKSUM(NewId())) % 100 AS [SomeValue]
INTO [SourceTable]
FROM sysobjects a
END
DECLARE @TQueries TABLE (
[Ordinal] INT,
[WherePredicate] NVARCHAR(MAX),
[OrderByPredicate] NVARCHAR(MAX)
);
-- Simulate SELECTs with different order by that get different data due to varying WHERE clauses and ORDER conditions
INSERT INTO @TQueries VALUES ( 1, N'[Id] IN (6,11,13,7,10,3,15)', '[SomeValue] ASC' ) -- Sort Asc
INSERT INTO @TQueries VALUES ( 2, N'[Id] IN (9,15,14,20,17)', '[SomeValue] DESC' ) -- Sort Desc
INSERT INTO @TQueries VALUES ( 3, N'[Id] IN (20,10,1,16,11,19,9,15,17,6,2,3,13)', 'NEWID()' ) -- Sort Random
My main issue has been avoiding the use of a CURSOR or iterating through the rows one by one. The closest I've come to a set-based operation that meets these criteria is using a table variable to store the results of each query, or a massive CTE.
Suggestions and comments are welcome.
Here's a solution that builds a single statement both to run all the queries and to return the results.
It uses a similar approach to the one in your answer when iterating over the @TQueries table, i.e. it also uses {...} tokens where column values from @TQueries should go, and it puts the values there with nested REPLACE() calls.
Other than that, it heavily depends on ranking functions, and I'm not sure if it doesn't really abuse them. You'd need to test this method before deciding if it's better or worse than the one you've got so far.
DECLARE @QueryTemplate nvarchar(max), @FinalSQL nvarchar(max);
SET @QueryTemplate =
N'SELECT
[Id],
QueryRank = {Ordinal},
RowRank = ROW_NUMBER() OVER (ORDER BY {OrderByPredicate})
FROM [dbo].[SourceTable]
WHERE {WherePredicate}
';
SET @FinalSQL =
N'WITH AllData AS (
' +
SUBSTRING(
(
SELECT
'UNION ALL ' +
REPLACE(REPLACE(REPLACE(@QueryTemplate,
'{Ordinal}' , [Ordinal] ),
'{OrderByPredicate}', [OrderByPredicate]),
'{WherePredicate}' , [WherePredicate] )
FROM @TQueries
ORDER BY [Ordinal]
FOR XML PATH (''), TYPE
).value('.', 'nvarchar(max)'),
11, -- starting just after the first 'UNION ALL '
CAST(0x7FFFFFFF AS int) -- max int; no need to specify the exact length
) +
'),
RankedData AS (
SELECT
[Id],
QueryRank,
RowRank,
ValueRank = ROW_NUMBER() OVER (PARTITION BY [Id] ORDER BY QueryRank)
FROM AllData
)SELECT TOP (@top)
[Id]
FROM RankedData
WHERE ValueRank = 1
ORDER BY
QueryRank,
RowRank
';
PRINT @FinalSQL;
EXECUTE sp_executesql @FinalSQL, N'@top int', 10;
Basically, every subquery gets these auxiliary columns:
QueryRank – a constant value (within the subquery's result set) derived from [Ordinal];
RowRank – a ranking assigned to a row based on the [OrderByPredicate].
The result sets are UNIONed and then every entry of every unique value is again ranked (ValueRank) based on the query ranking.
When pulling the final result set, duplicates are suppressed (by the condition ValueRank = 1), and QueryRank and RowRank are used in the ORDER BY clause to preserve the original row order.
I used EXECUTE sp_executesql @query instead of EXECUTE (@query), because the former allows you to add parameters to the query. In particular, I parametrised the number of results to return (the argument of TOP). But you could certainly concatenate that value into the dynamic script directly, just like the other things, if you prefer EXECUTE () over EXECUTE sp_executesql.
If you like, you can try this query at SQL Fiddle. (Note: the SQL Fiddle version replaces the @TQueries table variable with the TQueries table.)
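As a minimal illustration of that parameterisation point (a hypothetical stand-alone query, not part of the script above):
DECLARE @q nvarchar(max) = N'SELECT TOP (@top) [Id] FROM [dbo].[SourceTable] ORDER BY [Id]';
EXECUTE sp_executesql @q, N'@top int', @top = 10;   -- @top is passed as a real parameter
-- with EXECUTE (@q) you would instead have to concatenate the literal 10 into the string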
This is what I've managed to piece together, cobbled from my original response and improved upon by comments from @AndriyM.
DECLARE @sql_prefix NVARCHAR(MAX);
SET @sql_prefix =
N'DECLARE @TResults TABLE (
[Ordinal] INT IDENTITY(1,1),
[ContentItemId] INT
);
DECLARE @max INT, @top INT;
SELECT @max = 10;';
DECLARE @sql_insert_template NVARCHAR(MAX), @sql_body NVARCHAR(MAX);
SET @sql_insert_template =
N'SELECT @top = @max - COUNT(*) FROM @TResults;
INSERT INTO @TResults
SELECT TOP (@top) [Id]
FROM [dbo].[SourceTable]
WHERE
{WherePredicate}
AND NOT EXISTS (
SELECT 1
FROM @TResults AS [tr]
WHERE [tr].[ContentItemId] = [SourceTable].[Id]
)
ORDER BY {OrderByPredicate};';
WITH Query ([Ordinal],[SqlCommand]) AS (
SELECT
[Ordinal],
REPLACE(REPLACE(@sql_insert_template, '{WherePredicate}', [WherePredicate]), '{OrderByPredicate}', [OrderByPredicate])
FROM @TQueries
)
SELECT
@sql_body = @sql_prefix + (
SELECT [SqlCommand]
FROM Query
ORDER BY [Ordinal] ASC
FOR XML PATH(''),TYPE).value('.', 'varchar(max)') + CHAR(13)+CHAR(10)
+N' SELECT * FROM @TResults ORDER BY [Ordinal]';
EXEC(@sql_body);
The basic idea is to use a table variable to hold the results of each query. I create a template for the SQL and replace the values in the template based on what is stored in @TQueries.
Once the entire script is completed I run it with EXEC.
