Converting Oracle statement to SQL Server format

Below is an Oracle script that I need to execute on an SQL Server.
SELECT
records.pr_id,
SUBSTR (REPLACE (REPLACE (XMLAGG (XMLELEMENT ("x", prad4.selection_value)
ORDER BY prad4.selection_value),'</x>'),'<x>',' ; '),4) as teva_role
FROM records
Thanks for the help,
Barry

I programmed in SQL for years in several environments and it is about 75% the same across platforms. So the SQL statement should work largely as-is; however, the functions (REPLACE, SUBSTR) are what you will need to research and change.
Also, you select columns from prad4 without including it in the FROM clause, which is a problem.
And, finally, your parentheses aren't balanced, which, I would think, would be a problem in Oracle as well.

This is basically concatenating a set of strings with a delimiter. The common way to do this is with FOR XML PATH(''), which is roughly the SQL Server equivalent of Oracle's XMLAGG()/XMLELEMENT() combination, just with different syntax. Combining it with the TYPE directive and .value() prevents characters that are special in XML (such as & and <) from being entity-encoded. STUFF() takes care of the SUBSTR() part of your code by stripping the leading delimiter. For a more detailed explanation, you can read this article on Creating a comma-separated list.
The code should look similar to this:
SELECT records.pr_id,
       STUFF((SELECT ' ; ' + prad4.selection_value
              FROM prad4
              WHERE prad4.pr_id = records.pr_id
              ORDER BY prad4.selection_value
              FOR XML PATH(''), TYPE).value('./text()[1]', 'varchar(max)'),
             1, 3, '') AS teva_role
FROM records;
Of course, with the improvements in SQL Server 2017, the code can be simplified to something like this (note the join to prad4 and the GROUP BY, which the aggregate needs):
SELECT records.pr_id,
       STRING_AGG(prad4.selection_value, ' ; ')
           WITHIN GROUP (ORDER BY prad4.selection_value ASC) AS teva_role
FROM records
INNER JOIN prad4
    ON prad4.pr_id = records.pr_id
GROUP BY records.pr_id;

Related

T-SQL: LIKE operator, compare string to one value

Is there a way to compare multiple values in one column to a single value in another column?
Example:
Column A contains: [a;b;c;d]
Column B contains: [a]
At the moment I'm using the LIKE operator to achieve this, but I get no result. I tried it with a % wildcard but I get no match because of the ;.
As Larnu suggested, the real fix here is to fix the design. You should go back to the owners and remind them that the database is for storing relational data; if you're jamming multiple "facts" into a single column, you may as well be using a flat file. The exception is if you are storing a comma-separated list for the application and only the application is responsible for assembling and exploding that set.
Anyway, given that you are probably stuck with this (and let's say ColumnA is limited to 128 characters):
CREATE TABLE dbo.BadDesign
(
ColumnA nvarchar(128),
ColumnB nvarchar(max)
);
INSERT dbo.BadDesign(ColumnA, ColumnB) VALUES
(N'[a]', N'[a;b;c;d]'), -- only match
(N'[p]', N'[q;r;s]'),
(N'[h]', N'[hi;j;k]');
You can see the following solutions demonstrated in turn in this db<>fiddle:
Nested Replace
In the old days (before SQL Server 2017), we would perform nested REPLACE() calls to get rid of the square brackets, replacing each end of the string with delimiters:
-- All versions
SELECT ColumnA, ColumnB
FROM dbo.BadDesign
WHERE REPLACE(REPLACE(ColumnB, N'[',N';'),N']', N';')
LIKE N'%' + REPLACE(REPLACE(ColumnA ,N'[',N';'),N']', N';') + N'%';
Gross, but results:
ColumnA    ColumnB
-------    ---------
[a]        [a;b;c;d]
We can't use TRIM() on versions prior to SQL Server 2017, but I explain below why we don't want to use that function on modern versions anyway.
OpenJson
In SQL Server 2016+ we can use OPENJSON() after a little manipulation of the string. Here I use a PARSENAME() trick, which is only safe if ColumnA is <= 128 characters; I show other workarounds in this db<>fiddle:
SELECT b.ColumnA, b.ColumnB
FROM dbo.BadDesign AS b
CROSS APPLY OPENJSON(REPLACE(REPLACE(REPLACE(
b.ColumnB, N'[', N'["'), N']', N'"]"'), N';', N'","')) AS j
WHERE j.value = PARSENAME(b.ColumnA, 1);
Results:
ColumnA    ColumnB
-------    ---------
[a]        [a;b;c;d]
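To see what OPENJSON actually receives, here is the string manipulation in isolation, using the literal value from the sample row above:
-- The nested REPLACE turns N'[a;b;c;d]' into the JSON array N'["a","b","c","d"]',
-- which OPENJSON then explodes into one row per element:
SELECT value
FROM OPENJSON(REPLACE(REPLACE(REPLACE(N'[a;b;c;d]', N'[', N'["'), N']', N'"]'), N';', N'","'));
-- returns: a, b, c, d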
Translate
In SQL Server 2017, it can be a little less gross with TRANSLATE():
-- SQL Server 2017+
SELECT ColumnA, ColumnB
FROM dbo.BadDesign
WHERE TRANSLATE(ColumnB, N'[]',N';;')
LIKE N'%' + TRANSLATE(ColumnA, N'[]',N';;') + N'%';
ColumnA    ColumnB
-------    ---------
[a]        [a;b;c;d]
We don't want to use TRIM() here because we don't simply want to remove the enclosing square brackets; we want delimiters there so we can always compare A to B regardless of where B is in the string. Without surrounding delimiters replaced or translated, we could get inaccurate results if the match is at the beginning or end of the multi-value string.
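As a quick illustration using the N'[h]' / N'[hi;j;k]' row from the sample data: trimming only the brackets produces a false positive, while keeping the delimiters does not:
-- Brackets merely trimmed: 'h' matches inside 'hi' (wrong)
SELECT CASE WHEN N'hi;j;k' LIKE N'%' + N'h' + N'%'
            THEN 'match' ELSE 'no match' END AS trimmed_only;    -- match

-- Brackets translated to delimiters: ';h;' cannot match inside ';hi;j;k;' (correct)
SELECT CASE WHEN N';hi;j;k;' LIKE N'%' + N';h;' + N'%'
            THEN 'match' ELSE 'no match' END AS with_delimiters; -- no match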
Split Function
Alternatively, you could create this function on SQL Server 2016+ (or a messier one that doesn't use STRING_SPLIT() in earlier versions - as Smor noted, a search will turn up hundreds of those):
CREATE FUNCTION dbo.SplitAndClean(@s nvarchar(max))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
(
  SELECT value
  FROM STRING_SPLIT
  (
    -- if 2016:
    REPLACE(REPLACE(@s, N'[', N';'), N']', N';'),
    -- if 2017+, TRANSLATE() is slightly cleaner:
    /* TRANSLATE(@s, N'[]', N';;'), */
    N';'
  )
  WHERE value > N''
);
Then you can say:
SELECT bd.ColumnA, bd.ColumnB
FROM dbo.BadDesign AS bd
CROSS APPLY dbo.SplitAndClean(bd.ColumnA) AS a
CROSS APPLY dbo.SplitAndClean(bd.ColumnB) AS b
WHERE a.value = b.value;
ColumnA    ColumnB
-------    ---------
[a]        [a;b;c;d]
But in the end...
...these are all gross "solutions" masking bad design, and you should really have them reconsider how they're using the database.
I know that many shops can't just switch to passing sets between the app and the database using TVPs, because several client providers and ORMs haven't quite had more than a decade to catch that train. If you can't use TVPs or can't change the app, you should at least consider intercepting the comma-separated list passed by the app and breaking it apart using STRING_SPLIT() or the like. Then you can store the values relationally and let the database do what the database was designed to do, without being handcuffed by app limitations.
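A minimal sketch of that interception, assuming a hypothetical child table dbo.GoodDesign(ParentId, Value) and an incoming @csv parameter (both names are made up for illustration):
-- Explode the app's delimited string once, at the boundary, and store one value per row.
CREATE TABLE dbo.GoodDesign
(
    ParentId int           NOT NULL,
    Value    nvarchar(128) NOT NULL
);

DECLARE @ParentId int = 1, @csv nvarchar(max) = N'a;b;c;d';

INSERT dbo.GoodDesign (ParentId, Value)
SELECT @ParentId, value
FROM STRING_SPLIT(@csv, N';')
WHERE value > N'';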
If there will always be only one value in col_b, as in your example, you can use a nested REPLACE to remove [ and ] and then use LIKE for the search:
select *
from test_data
where col_a like '%' + replace(replace(col_b, '[', ''), ']', '') + '%';
But if there could be more than one value in col_b, and it could be in any order (e.g. "[a;c]" or "[d;a]"), you'll find an answer among the already-answered questions, or you can look up the STRING_SPLIT() function on MSDN; its examples section will definitely help you out. A rough sketch of that idea follows.
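For illustration only (SQL Server 2016+, reusing the test_data/col_a/col_b names from the query above), splitting both columns and joining the pieces could look like this:
-- Strip the brackets, split both columns on ';', and keep rows where any piece matches.
-- DISTINCT guards against duplicate rows when several pieces match.
SELECT DISTINCT d.col_a, d.col_b
FROM test_data AS d
CROSS APPLY STRING_SPLIT(REPLACE(REPLACE(d.col_a, '[', ''), ']', ''), ';') AS a
CROSS APPLY STRING_SPLIT(REPLACE(REPLACE(d.col_b, '[', ''), ']', ''), ';') AS b
WHERE a.value = b.value;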

How to use an evaluated expression in SQL IN clause

I have an application that takes a comma separated string for multiple IDs to be used in the 'IN' clause of a SQL query.
SELECT * FROM TABLE WHERE [TABLENAME].[COLUMNNAME]
IN
((SELECT '''' + REPLACE('PARAM(0, Enter ID/IDS. Separate multiple ids by
comma., String)', char(44), ''',''') + ''''))
I have tested that PARAM gets the string entered e.g. 'ID1, ID2' but SELECT/REPLACE does not execute. The statement becomes,
SELECT * FROM TABLE WHERE [TABLENAME].[COLUMNNAME]
IN
((SELECT '''' + REPLACE('ID1,ID2', char(44), ''',''') + ''''))
I am trying to achieve,
SELECT * FROM TABLE WHERE [TABLENAME].[COLUMNNAME]
IN ('ID1', 'ID2')
The query does not return any results/errors. I am confident the corresponding records are in the database I am working with. Not sure how to fix this.
You can't do it like this. The IN operator expects a list of values separated by commas, but you supply it with a single value that happens to contain a comma-delimited string.
If you are working on SQL Server version 2016 or higher, you can use the built-in STRING_SPLIT to convert the delimited string into a table (note that IN needs a subquery here):
SELECT *
FROM TABLE
WHERE [TABLENAME].[COLUMNNAME] IN
    (SELECT value FROM STRING_SPLIT(@CommaDelimitedString, ','))
For older versions, there are multiple user-defined functions you can choose from; my personal favorite is Jeff Moden's DelimitedSplit8K. For more options, read Aaron Bertrand's Split strings the right way – or the next best way.
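For instance, assuming dbo.DelimitedSplit8K has already been created in your database (its output column is named Item), the pre-2016 version of the query above would look roughly like this:
-- Sketch for SQL Server versions older than 2016, using Jeff Moden's splitter:
SELECT *
FROM TABLE
WHERE [TABLENAME].[COLUMNNAME] IN
    (SELECT Item FROM dbo.DelimitedSplit8K(@CommaDelimitedString, ','))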

Using FOR XML PATH, TYPE).value('.[1]','nvarchar(max)')

I have read that using this is good for handling special characters, but at the same time it over-complicates the query and generates a "cardinality estimate warning".
If I use plain FOR XML PATH(''), the query plan is much better and the cardinality warning is gone. Has anybody faced this issue before? Is there any workaround to keep using FOR XML PATH, TYPE).value('.[1]','nvarchar(max)') and get rid of the cardinality issue?
SELECT r.ServiceId,
STUFF(
(
SELECT '; ' + u.Name
FROM dbo.UsedFor u
inner join dbo.ServiceUsedRelation r2
on u.UsedId = r2.UsedId
where
r2.ServiceId = r.ServiceId
FOR XML PATH, TYPE).value('.[1]','nvarchar(max)')
, 1
, 1
, ''
) as Name
FROM dbo.ServiceUsedRelation r
GROUP BY r.ServiceId
Stop using FOR XML PATH for concatenation. If you are using SQL Server 2017 you can use STRING_AGG. If not, you can implement the SQL String Utility Functions - look for the Concatenate class. More information can be found here.
Having a function that concatenates strings but is an aggregate at the same time gives you the ability to write more complex grouping queries. It also simplifies the T-SQL syntax and can improve performance.
For example, your query would look like:
SELECT ServiceId
,[dbo].[Concatenate] (Name)
FROM dbo.ServiceUsedRelation
GROUP BY ServiceId;
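Alternatively, on SQL Server 2017+ you can skip the CLR aggregate entirely; a STRING_AGG sketch of the original query (reusing the join from the question) would look roughly like this:
-- STRING_AGG version (SQL Server 2017+), grouping per ServiceId:
SELECT r.ServiceId,
       STRING_AGG(u.Name, '; ') WITHIN GROUP (ORDER BY u.Name) AS Name
FROM dbo.ServiceUsedRelation AS r
INNER JOIN dbo.UsedFor AS u
    ON u.UsedId = r.UsedId
GROUP BY r.ServiceId;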

Need Help Converting Oracle Query to SQL Server

Several weeks ago I made a post to get help with converting a comma-delimited list of values into a format that could be used as part of an IN clause in Oracle. Here is a link to the post.
Oracle invalid number in clause
The answer was to split up the list into an individual row for each value. Here's the answer that I ended up using.
SELECT trim(regexp_substr(str, '[^,]+', 1, LEVEL)) str
FROM ( SELECT '1,2,3,4' str FROM dual )
CONNECT BY instr(str, ',', 1, LEVEL - 1) > 0
Is there a way that I can do something similar in SQL Server without having to create a custom function? I noticed that there's a STRING_SPLIT function, but I don't seem to have access to that on this SQL Server.
Any advice you might have would be greatly appreciated. I've been trying to take a stab at this for the majority of the day.
The STRING_SPLIT function is available in MS SQL Server starting with version 2016. If you use an older version, you can write a few lines of code that do the same thing.
declare @str varchar(100) = '1,2,3,4' --initial string
;with cte as ( --build xml from the string
    select cast('<s>' + replace(@str, ',', '</s><s>') + '</s>' as xml) x
)
--receive rows
select t.v.value('.[1]', 'int') value
from cte cross apply cte.x.nodes('s') t(v)
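For comparison, on SQL Server 2016+ (database compatibility level 130 or higher) the built-in function returns the same rows directly:
--equivalent one-liner where string_split is available
select cast(value as int) value
from string_split('1,2,3,4', ',')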

What is a good way to paginate out of SQL 2000 using Start and Length parameters?

I have been given the task of refactoring an existing stored procedure so that the results are paginated. The SQL server is SQL 2000 so I can't use the ROW_NUMBER method of pagination. The Stored proc is already fairly complex, building chunks of a large sql statement together before doing an sp_executesql and has various sorting options available.
The first result out of Google seems like a good method, but I think the example is wrong in that the 2nd sort needs to be reversed, and the case where the start is less than the page length breaks down. The 2nd example on that page also seems like a good method, but the SP takes a pageNumber rather than the start record. And the whole temp table thing seems like it would be a performance drain.
I am making progress going down this path, but it seems slow and confusing, and I am having to use quite a few REPLACE calls on the sort order to get it to come out right.
Are there any other easier techniques I am missing?
There are two SQL Server 2000 compliant answers in this StackOverflow question - skip the accepted one, which is 2005-only:
No, I'm afraid not - SQL Server 2000 doesn't have any of the 2005 niceties like Common Table Expression (CTE) and such..... the method described in the Google link seems to be one way to go.
Marc
Also take a look here
http://databases.aspfaq.com/database/how-do-i-page-through-a-recordset.html
scroll down to Stored Procedure Methods
Depending on your application architecture (and your amount of data, it's structure, DB server load etc.) you could use the DB access layer for paging.
For example, with ADO you can define a page size on the record set (DataSet in ADO.NET) object and do the paging on the client. Classic ADO even lets you use a server side cursor, though I don't know if that scales well (I think this was removed altogether in ADO.NET).
MSDN documentation: Paging Through a Query Result (ADO.NET)
After playing with this for a while, there seems to be only one way of really doing this (using Start and Length parameters) and that's with the temp table.
My final solution was to not use the @start parameter and instead use a @page parameter, and then use the following:
SET @sql = @sql + N'
SELECT * FROM
(
    SELECT TOP ' + CAST(@length AS varchar) + N' * FROM
    (
        SELECT TOP ' + CAST(@page * @length AS varchar) + N'
            field1,
            field2
        FROM Table1
        ORDER BY field1 ASC
    ) AS Result
    ORDER BY field1 DESC
) AS Result
ORDER BY field1 ASC'
The original query was much more complex than what is shown here, and the ORDER BY involved at least 3 fields and was determined by a long CASE clause, requiring me to use a series of REPLACE calls to get the fields in the right order.
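As an illustration of that REPLACE juggling (the column names here are hypothetical, not from the original proc), flipping a multi-column sort string for the middle query can be done like this:
-- Swap ASC/DESC via a placeholder so the middle SELECT TOP sorts in reverse.
-- Caveat: this breaks if a column name itself contains 'ASC' or 'DESC'.
DECLARE @orderBy nvarchar(200), @reversed nvarchar(200)
SET @orderBy  = N'field1 ASC, field2 DESC'
SET @reversed = REPLACE(REPLACE(REPLACE(@orderBy, N'ASC', N'~'), N'DESC', N'ASC'), N'~', N'DESC')
SELECT @reversed  -- N'field1 DESC, field2 ASC'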
We've been using variations on this query for a number of years. This example returns items 50,001 through 50,300.
select top 300
Items.*
from Items
where
Items.CustomerId = 1234 AND
Items.Active = 1 AND
Items.Id not in
(
select top 50000 Items.Id
from Items
where
Items.CustomerId = 1234 AND
Items.Active = 1
order by Items.id
)
order by Items.Id
