Can I use LIKE and IN together dynamically? - sql-server

I want to be able to say:
SELECT * FROM myTable WHERE accountName LIKE('%john%', '%bill%', '%lory%'.....)
I want this to be dynamic, meaning that depending on user input the list of '%name%' parts will differ. One time it could have 3 names and another time just 1.

You could use JOIN:
SELECT DISTINCT myTable.*
FROM myTable
JOIN (SELECT '%john%' UNION ALL
SELECT '%bill%' UNION ALL
SELECT '%lory%') sub(c) -- this could be anything table variable/temp
ON myTable.accountName LIKE sub.c;
Keep in mind that '%...%' is not SARG-able.
With table variable:
DECLARE @tab AS TABLE (c NVARCHAR(100));
INSERT INTO @tab(c) VALUES ('...');
-- ...
SELECT DISTINCT myTable.*
FROM myTable
JOIN @tab t
ON myTable.accountName LIKE t.c;
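If the names arrive as a single delimited parameter, the table variable can be filled from it. A minimal sketch, assuming SQL Server 2016+ for STRING_SPLIT (the @names value is just an example):
DECLARE @names NVARCHAR(4000) = N'john,bill,lory'; -- hypothetical user input
DECLARE @tab TABLE (c NVARCHAR(100));
INSERT INTO @tab(c)
SELECT '%' + LTRIM(RTRIM(value)) + '%' -- wrap each name in wildcards
FROM STRING_SPLIT(@names, ',');
SELECT DISTINCT myTable.*
FROM myTable
JOIN @tab t
ON myTable.accountName LIKE t.c;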

WHERE accountName LIKE('%john%', '%bill%', '%lory%'.....)
This is invalid syntax and won't work. The easiest way to do what you're trying to do would be to use a "string splitter" function such as Jeff Moden's DelimitedSplit8K:
DECLARE @Names VARCHAR(1000) = 'john, bill, lory';
SELECT
*
FROM
dbo.myTable mt
CROSS APPLY dbo.DelimitedSplit8K(@Names, ',') dsk
WHERE
mt.accountname LIKE '%' + dsk.Item + '%';
OR
SELECT
*
FROM
dbo.myTable mt
WHERE
EXISTS (
SELECT 1
FROM dbo.DelimitedSplit8K(@Names, ',') dsk
WHERE mt.accountname LIKE '%' + dsk.Item + '%'
);
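If you're on SQL Server 2016+ and don't have DelimitedSplit8K installed, the built-in STRING_SPLIT can stand in here, since ordinal position isn't needed for this particular query. A sketch:
DECLARE @Names VARCHAR(1000) = 'john,bill,lory';
SELECT mt.*
FROM dbo.myTable mt
WHERE EXISTS (
    SELECT 1
    FROM STRING_SPLIT(@Names, ',') s
    WHERE mt.accountname LIKE '%' + LTRIM(RTRIM(s.value)) + '%' -- trim in case the list contains spaces
);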
HTH, Jason

Building off of Jason's and Lad's very excellent solutions, you could speed things up with an indexed view.
Before I continue, it's important to note that this will slow down inserts/updates/deletes in high-traffic OLTP environments that modify these tables often. TEST, TEST, TEST!
I work in the Data Warehouse world where this works out quite nicely.
First a keyword table for common search terms:
CREATE TABLE dbo.keyword (kw varchar(100) not null, constraint uq_keyword unique clustered(kw));
INSERT dbo.keyword VALUES ('john'), ('bill'), ('lory');
Next for your table:
CREATE TABLE dbo.mytable(someid int identity not null, accountname varchar(100));
INSERT dbo.mytable(accountname) VALUES
('John''s Flowers'), ('Candles by Bill'), ('Some other account'), ('The Lory Group LLC');
Now the view along with a couple important indexes:
CREATE VIEW dbo.vw_mytable_fastKWLookup
WITH SCHEMABINDING AS -- schemabinding required for indexed views
SELECT
t.someid,
t.accountname,
k.kw
FROM dbo.mytable t
CROSS JOIN dbo.keyword k
WHERE t.accountname LIKE '%'+k.kw+'%';
GO
-- required unique clustered index
CREATE UNIQUE CLUSTERED INDEX uq_cl_vw_mytable_fastKWLookup
ON dbo.vw_mytable_fastKWLookup(kw, someid);
-- A good nonclustered index
CREATE NONCLUSTERED INDEX nc_vw_mytable_fastKWLookup__kw
ON dbo.vw_mytable_fastKWLookup(kw) include (someid, accountname);
Once your indexed view is in place you can add your search words to an IN list like so:
SELECT someid, accountname, kw
FROM vw_mytable_fastKWLookup WITH (NOEXPAND)
WHERE kw IN ('John', 'Lory', 'Bill', 'Fred');
Results:
someid      accountname          kw
----------- -------------------- -----
1           John's Flowers       john
2           Candles by Bill      bill
4           The Lory Group LLC   lory
The reward here is an execution plan that does a non-clustered index seek for %searchstring% style searches.
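Note that the view only covers terms that already exist in dbo.keyword, so a search term has to be added there first; SQL Server then maintains the view's indexes automatically. A minimal sketch (the term 'fred' is just an example):
INSERT dbo.keyword (kw)
SELECT v.kw
FROM (VALUES ('fred')) v(kw) -- hypothetical new search term
WHERE NOT EXISTS (SELECT 1 FROM dbo.keyword k WHERE k.kw = v.kw);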

Related

Query tuning required for expensive query

Can someone help me optimize this code? Another way to optimize it would be a computed column, but we cannot change the schema on prod since we are not sure how many APIs are used to push data into this table. The table has millions of rows, and adding a non-clustered index is not helping: because of the query cost it still goes for a scan.
create table testcts(
name varchar(100)
)
go
insert into testcts(
name
)
select 'VK.cts.com'
union
select 'GK.ms.com'
go
DECLARE @list varchar(100) = 'VK,GK'
select * from testcts where replace(replace(name,'.cts.com',''),'.ms.com','') in (select value from string_split(@list,','))
drop table testcts
One possibility might be to strip off the .cts.com and .ms.com subdomain/domain endings before you insert or store the name data in your table. Then, use the following query instead:
SELECT *
FROM testcts
WHERE name IN (SELECT value FROM STRING_SPLIT(@list, ','));
Now SQL Server should be able to use an index on the name column.
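For example, the stripping could happen at insert time. A sketch using the sample data from the question:
INSERT INTO testcts (name)
SELECT REPLACE(REPLACE(v.name, '.cts.com', ''), '.ms.com', '') -- store the bare name only
FROM (VALUES ('VK.cts.com'), ('GK.ms.com')) AS v(name);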
If your values are always suffixed by cts.com or ms.com you could add that to the search pattern:
SELECT {YourColumns} --Don't use *
FROM dbo.testcts t
JOIN (SELECT CONCAT(SS.[value], V.Suffix) AS [value]
FROM STRING_SPLIT(@list, ',') SS
CROSS APPLY (VALUES ('.cts.com'),
('.ms.com')) V (Suffix) ) L ON t.[name] = L.[value];

Join a table whose name is stored in the first table

I have a first table [TABLE1] with columns [ID], [Description], [DetailTable]. I want to join [TABLE1] with the [DetailTable]. The name of [DetailTable] is stored in [TABLE1] column.
"SELECT * FROM TABLE1 CROSS JOIN ?????"
Any suggestions?
So... if you cheat and SELECT * from the detail table, you could do something a bit like this, with dynamic SQL:
-- For the example, choose either 1 or 2 to see the #foo table or the #bar table
DECLARE @Id INT = 1
-- EXAMPLE SCENARIO SETUP --
CREATE TABLE #ListOfTables
( ID INT IDENTITY(1,1) NOT NULL
,[Description] NVARCHAR(255) NOT NULL
,[DetailTable] NVARCHAR(255) NOT NULL);
CREATE TABLE #foo
(Foothing VARCHAR(20));
CREATE TABLE #bar
(Barthing VARCHAR(20));
-- TEST VALUES --
INSERT #ListOfTables VALUES ('foo','#foo'),('bar','#bar');
INSERT #foo VALUES ('A foothing Foothing');
INSERT #bar VALUES ('A barthing Barthing');
-- THE SCRIPT --
DECLARE @SQL NVARCHAR(MAX) = '';
SELECT @SQL =
' SELECT Tab.Id, Tab.[Description], Tab2.*
FROM #ListOfTables AS Tab
CROSS JOIN ' + T.DetailTable + ' AS Tab2
WHERE Tab.Id = ' + CONVERT(VARCHAR(10),@Id)
FROM #ListOfTables T
WHERE T.Id = @Id;
PRINT @SQL
EXEC sp_executesql @SQL;
-- CLEAN UP --
DROP TABLE #ListOfTables;
DROP TABLE #bar;
DROP TABLE #foo;
However, I have to agree with the comments that this is a pretty nasty way to do things. If you want to choose particular columns and the columns are different for each detail table, then things will start to get really unpleasant... Does this give you something you can start with?
Remember, the best solution will almost certainly involve redesigning things so you don't have to jump through these hoops!
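If you do go down this road, it's safer to quote the object name with QUOTENAME and pass the id as a real parameter rather than concatenating it. A sketch against the same example setup (run it before the clean-up section; the variable names are mine):
DECLARE @DetailTable NVARCHAR(255), @SafeSQL NVARCHAR(MAX);
SELECT @DetailTable = T.DetailTable
FROM #ListOfTables T
WHERE T.Id = @Id;
SET @SafeSQL = N'SELECT Tab.Id, Tab.[Description], Tab2.*
FROM #ListOfTables AS Tab
CROSS JOIN ' + QUOTENAME(@DetailTable) + N' AS Tab2
WHERE Tab.Id = @Id;';
EXEC sp_executesql @SafeSQL, N'@Id INT', @Id = @Id;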
All of the detail tables must have identical schemas.
Create a view that unions all the tables:
CREATE VIEW vAllDetails AS
SELECT 'DETAIL1' table_name, * from DETAIL1
UNION ALL
SELECT 'DETAIL2' table_name, * from DETAIL2
UNION ALL
SELECT 'DETAIL3' table_name, * from DETAIL3
When you join against this view, SQL Server can generate a plan that uses a "startup predicate expression" (see the sample plan). At first glance it looks like SQL is going to scan all of the detail tables, but it won't: the left-most filters include a startup predicate, so for each row we read from TABLE1, a branch is executed only if its table_name matches.
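A usage sketch, with the column names assumed from the question:
SELECT t1.ID, t1.[Description], d.*
FROM TABLE1 t1
JOIN vAllDetails d
  ON d.table_name = t1.DetailTable;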

Keyset Pagination - Filter By Search Term across Multiple Columns

I'm trying to move away from OFFSET/FETCH pagination to keyset pagination (also known as the seek method). Since I've just started, there are many questions on my mind, but this is one of many where I try to get the pagination right along with a filter.
So I have 2 tables
aspnet_users
having columns
PK
UserId uniqueidentifier
Fields
UserName NVARCHAR(256) NOT NULL,
AffiliateTag varchar(50) NULL
.....other fields
aspnet_membership
having columns
PK+FK
UserId uniqueidentifier
Fields
Email NVARCHAR(256) NOT NULL
.....other fields
Indexes
Non Clustered Index on Table aspnet_users (UserName)
Non Clustered Index on Table aspnet_users (AffiliateTag)
Non Clustered Index on Table aspnet_membership(Email)
I have a page that lists the users (based on a search term) with the page size set to 20. I want to search across multiple columns, so instead of using OR I found that having a separate query for each column and then UNIONing them makes the indexes get used correctly.
So I have this stored proc that takes the search term and, optionally, the UserName and UserId of the last record, for the next page:
Create proc [dbo].[sp_searchuser]
@take int,
@searchTerm nvarchar(max) = NULL,
@lastUserName nvarchar(256)=NULL,
@lastUserId nvarchar(256)=NULL
AS
IF(@lastUserName IS NOT NULL AND @lastUserId IS NOT NULL)
Begin
select top (@take) *
from
(
select u.UserId, u.UserName, u.AffiliateTag, m.Email
from aspnet_Users as u
inner join aspnet_Membership as m
on u.UserId=m.UserId
where u.UserName like @searchTerm
UNION
select u.UserId, u.UserName, u.AffiliateTag, m.Email
from aspnet_Users as u
inner join aspnet_Membership as m
on u.UserId=m.UserId
where u.AffiliateTag like convert(varchar(50), @searchTerm)
) as u1
where u1.UserName > @lastUserName
OR (u1.UserName=@lastUserName And u1.UserId > convert(uniqueidentifier, @lastUserId))
order by u1.UserName
End
Else
Begin
select top (@take) *
from
(
select u.UserId, u.UserName, u.AffiliateTag, m.Email
from aspnet_Users as u
inner join aspnet_Membership as m
on u.UserId=m.UserId
where u.UserName like @searchTerm
UNION
select u.UserId, u.UserName, u.AffiliateTag, m.Email
from aspnet_Users as u
inner join aspnet_Membership as m
on u.UserId=m.UserId
where u.AffiliateTag like convert(varchar(50), @searchTerm)
) as u1
order by u1.UserName
End
Now, to get the result for the first page with search term mua:
exec [sp_searchuser] 20, 'mua%'
it uses both of the indexes created (one on the UserName column and another on the AffiliateTag column), which is good.
But the problem is that I find the inner UNION queries return all the matching rows.
In this case, for example, the execution plan shows:
UserName LIKE subquery:
Number of Rows Read = 5
Actual Number of Rows = 4
AffiliateTag LIKE subquery:
Number of Rows Read = 465
Actual Number of Rows = 465
So in total the inner queries return 469 matching rows, and then the outer query takes 20 of them for the final result set. It is really reading more data than needed.
And when going to the next page:
exec [sp_searchuser] 20, 'mua%', 'lastUserName', 'lastUserId'
the execution plan shows:
UserName LIKE subquery:
Number of Rows Read = 5
Actual Number of Rows = 4
AffiliateTag LIKE subquery:
Number of Rows Read = 465
Actual Number of Rows = 445
In total the inner queries return 449 matching rows, so either with or without pagination it reads more data than needed.
My expectation is to somehow limit the inner queries so they do not return all matching rows.
You might be interested in the Logical Processing Order, which determines when the objects defined in one step are made available to the clauses in subsequent steps. The Logical Processing Order steps are:
FROM
ON
JOIN
WHERE
GROUP BY
WITH CUBE or WITH ROLLUP
HAVING
SELECT
DISTINCT
ORDER BY
TOP
Of course, as noted in the docs:
The actual physical execution of the statement is determined by the
query processor and the order may vary from this list.
meaning that sometimes some steps can start before the previous ones complete.
In your case, your query looks like this:
some data extraction
sort by user_name
get TOP records
There is no way to reduce the rows in the data extraction part: to have a deterministic result (we may actually need to order by user_name, user_id for that), we need to get all matching rows, sort them, and only then take the desired rows.
For example, imagine the first query returns 20 names starting with 'Z' and the second query returns only one name starting with 'A'. If you somehow stop the execution and skip the second query, you will get wrong results: 20 names starting with 'Z' instead of one starting with 'A' and 19 starting with 'Z'.
In such cases, I prefer to use dynamic T-SQL statements in order to get better execution times and reduce the code length. You are saying:
And I want to search across multiple columns so instead of doing OR I
find out having a separate query for each and then Union them will
make the index use correctly.
When you are using UNION you are performing double reads of your tables. In your case, you are reading the aspnet_Membership table twice and the aspnet_Users table twice (yes, you are using two different indexes here, but I believe they are not covering, so you end up performing lookups to extract the user's name and email).
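To illustrate what I mean by a dynamic statement, here is a rough sketch only (not the full procedure): it folds the two LIKE branches into one OR and appends the keyset predicate only when the anchor values are supplied, so a plan is compiled for exactly the branch you need.
DECLARE @sql NVARCHAR(MAX) = N'
SELECT TOP (@take) u.UserId, u.UserName, u.AffiliateTag, m.Email
FROM dbo.aspnet_Users AS u
JOIN dbo.aspnet_Membership AS m ON m.UserId = u.UserId
WHERE (u.UserName LIKE @searchTerm OR u.AffiliateTag LIKE @searchTerm)';
IF @lastUserName IS NOT NULL AND @lastUserId IS NOT NULL
    SET @sql += N'
  AND (u.UserName > @lastUserName
       OR (u.UserName = @lastUserName AND u.UserId > CONVERT(uniqueidentifier, @lastUserId)))';
SET @sql += N'
ORDER BY u.UserName, u.UserId;';
EXEC sp_executesql @sql,
    N'@take int, @searchTerm nvarchar(max), @lastUserName nvarchar(256), @lastUserId nvarchar(256)',
    @take, @searchTerm, @lastUserName, @lastUserId;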
I guess you could have started with covering indexes like in the example below:
DROP TABLE IF EXISTS [dbo].[StackOverflow];
CREATE TABLE [dbo].[StackOverflow]
(
[UserID] INT PRIMARY KEY
,[UserName] NVARCHAR(128)
,[AffiliateTag] NVARCHAR(128)
,[UserEmail] NVARCHAR(128)
,[a] INT
,[b] INT
,[c] INT
,[z] INT
);
CREATE INDEX IX_StackOverflow_UserID_UserName_AffiliateTag_I_UserEmail ON [dbo].[StackOverflow]
(
[UserID]
,[UserName]
,[AffiliateTag]
)
INCLUDE ([UserEmail]);
GO
INSERT INTO [dbo].[StackOverflow] ([UserID], [UserName], [AffiliateTag], [UserEmail])
SELECT TOP (1000000) ROW_NUMBER() OVER(ORDER BY t1.number)
,CONCAT('UserName',ROW_NUMBER() OVER(ORDER BY t1.number))
,CONCAT('AffiliateTag', ROW_NUMBER() OVER(ORDER BY t1.number))
,CONCAT('UserEmail', ROW_NUMBER() OVER(ORDER BY t1.number))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2;
GO
So, for the following query:
SELECT TOP 20 [UserID]
,[UserName]
,[AffiliateTag]
,[UserEmail]
FROM [dbo].[StackOverflow]
WHERE [UserName] LIKE 'UserName200%'
OR [AffiliateTag] LIKE 'UserName200%'
ORDER BY [UserName];
GO
The issue here is that we are reading all the rows even though we are using the index.
What's good is that the index is covering and we are not performing lookups. Depending on the search criteria, it may perform better than your approach.
If the performance is bad, we can use a trigger to UNPIVOT the original data and record it in a separate table. It may look like this (it would be better to use an attribute_id rather than the attribute name text as I do here):
DROP TABLE IF EXISTS [dbo].[StackOverflowAttributes];
CREATE TABLE [dbo].[StackOverflowAttributes]
(
[UserID] INT
,[AttributeName] NVARCHAR(128)
,[AttributeValue] NVARCHAR(128)
,PRIMARY KEY([UserID], [AttributeName], [AttributeValue])
);
GO
CREATE INDEX IX_StackOverflowAttributes_AttributeValue ON [dbo].[StackOverflowAttributes]
(
[AttributeValue]
)
INSERT INTO [dbo].[StackOverflowAttributes] ([UserID], [AttributeName], [AttributeValue])
SELECT [UserID]
,'Name'
,[UserName]
FROM [dbo].[StackOverflow]
UNION
SELECT [UserID]
,'AffiliateTag'
,[AffiliateTag]
FROM [dbo].[StackOverflow];
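The maintenance trigger itself isn't shown above; a minimal sketch for inserts might look like this (updates and deletes would need similar handling, and the trigger name is mine):
CREATE TRIGGER [dbo].[trg_StackOverflow_Attributes_Insert]
ON [dbo].[StackOverflow]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- mirror the new rows into the attribute table
    INSERT INTO [dbo].[StackOverflowAttributes] ([UserID], [AttributeName], [AttributeValue])
    SELECT [UserID], 'Name', [UserName]
    FROM inserted
    UNION
    SELECT [UserID], 'AffiliateTag', [AffiliateTag]
    FROM inserted;
END;
GO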
and the query from before will now look like:
SELECT TOP 20 U.[UserID]
,U.[UserName]
,U.[AffiliateTag]
,U.[UserEmail]
FROM [dbo].[StackOverflowAttributes] A
INNER JOIN [dbo].[StackOverflow] U
ON A.[UserID] = U.[UserID]
WHERE A.[AttributeValue] LIKE 'UserName200%'
ORDER BY U.[UserName];
Now we are reading only part of the index rows and after that performing a lookup.
In order to compare performance it will be better to use:
SET STATISTICS IO, TIME ON;
as it will show you how many pages are read from the indexes. The result can be visualized here.

How to delete Duplicate records in snowflake database table

How do I delete duplicate records from a Snowflake table? Thanks.
ID Name
1 Apple
1 Apple
2 Apple
3 Orange
3 Orange
Result should be:
ID Name
1 Apple
2 Apple
3 Orange
Adding a solution here that doesn't recreate the table, because recreating a table can break a lot of existing configurations and history.
Instead we are going to delete only the duplicate rows and insert a single copy of each, within a transaction:
-- find all duplicates
create or replace transient table duplicate_holder as (
select $1, $2, $3
from some_table
group by 1,2,3
having count(*)>1
);
-- time to use a transaction to insert and delete
begin transaction;
-- delete duplicates
delete from some_table a
using duplicate_holder b
where (a.$1,a.$2,a.$3)=(b.$1,b.$2,b.$3);
-- insert single copy
insert into some_table
select *
from duplicate_holder;
-- we are done
commit;
Advantages:
Doesn't recreate the table
Doesn't modify the original table
Only deletes and inserts duplicated rows (good for time travel storage costs, avoids unnecessary reclustering)
All in a transaction
If you have some primary key as such:
CREATE TABLE fruit (key number, id number, name text);
insert into fruit values (1,1, 'Apple'), (2,1,'Apple'),
(3,2, 'Apple'), (4,3, 'Orange'), (5,3, 'Orange');
then:
DELETE FROM fruit
WHERE key in (
SELECT key
FROM (
SELECT key
,ROW_NUMBER() OVER (PARTITION BY id, name ORDER BY key) AS rn
FROM fruit
)
WHERE rn > 1
);
But if you do not have a unique key then you cannot delete that way. At that point you can build a deduplicated copy of the table:
CREATE TABLE new_table_name AS
SELECT id, name FROM (
SELECT id
,name
,ROW_NUMBER() OVER (PARTITION BY id, name ORDER BY id) AS rn
FROM table_name
)
WHERE rn = 1
and then swap them
ALTER TABLE table_name SWAP WITH new_table_name
Here's a very simple approach that doesn't need any temporary tables. It will work very nicely for small tables, but might not be the best approach for large tables.
insert overwrite into some_table
select distinct * from some_table
;
The OVERWRITE keyword means that the table will be truncated before the insert takes place.
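Applied to the question's data it would look like this (a sketch; I'm calling the table fruit only because the question doesn't name it):
create or replace table fruit (id number, name text);
insert into fruit values (1, 'Apple'), (1, 'Apple'), (2, 'Apple'), (3, 'Orange'), (3, 'Orange');
insert overwrite into fruit
select distinct * from fruit;
select * from fruit; -- 1 Apple / 2 Apple / 3 Orange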
Snowflake does not have effective primary keys, their use is primarily with ERD tools.
Snowflake does not have something like a ROWID either, so there is no way to identify duplicates for deletion.
It is possible to temporarily add an "is_duplicate" column, e.g. numbering all the duplicates with the ROW_NUMBER() function, then delete all records with "is_duplicate" > 1 and finally drop the utility column.
Another way is to create a duplicate table and swap, as others have suggested.
However, constraints and grants must be kept. One way to do this is:
CREATE TABLE new_table LIKE old_table COPY GRANTS;
INSERT INTO new_table SELECT DISTINCT * FROM old_table;
ALTER TABLE old_table SWAP WITH new_table;
The code above removes exact duplicates. If you want to end up with a row for each "PK" you need to include logic to select which copy you want to keep.
This illustrates the importance of adding update timestamp columns in a Snowflake data warehouse.
This has been bothering me for some time as well. As Snowflake has added support for QUALIFY, you can now create a deduplicated table with a single statement, without subselects:
CREATE TABLE fruit (id number, nam text);
insert into fruit values (1, 'Apple'), (1,'Apple'),
(2, 'Apple'), (3, 'Orange'), (3, 'Orange');
CREATE OR REPLACE TABLE fruit AS
SELECT * FROM
fruit
qualify row_number() OVER (PARTITION BY id, nam ORDER BY id, nam) = 1;
SELECT * FROM fruit;
Of course you are left with a new table and lose the table history, primary keys, foreign keys and such.
Based on the above ideas, the following query worked perfectly in my case.
CREATE OR REPLACE TABLE SCHEMA.table
AS
SELECT
DISTINCT *
FROM
SCHEMA.table
;
Your question boils down to: how can I delete one of two perfectly identical rows? You can't. You can only do a DELETE FROM fruit WHERE ID = 1 AND Name = 'Apple';, and then both rows will go away. Or you don't, and keep both.
For some databases there are workarounds using internal row IDs, but there isn't one in Snowflake, see https://support.snowflake.net/s/question/0D50Z00008FQyGqSAL/is-there-an-internalmetadata-unique-rowid-in-snowflake-that-i-can-reference . You cannot limit deletes either, so your only option is to create a new table and swap.
Additional note on Hans Henrik Eriksen's remark about the importance of update timestamps: this is a real help when the duplicates were added later. If, for example, you want to keep the newer values, you can then do this:
-- setup
create table fruit (ID Integer, Name VARCHAR(16777216), "UPDATED_AT" TIMESTAMP_NTZ);
insert into fruit values (1, 'Apple', CURRENT_TIMESTAMP::timestamp_ntz)
, (2, 'Apple', CURRENT_TIMESTAMP::timestamp_ntz)
, (3, 'Orange', CURRENT_TIMESTAMP::timestamp_ntz);
-- wait > 1 nanosecond
insert into fruit values (1, 'Apple', CURRENT_TIMESTAMP::timestamp_ntz)
, (3, 'Orange', CURRENT_TIMESTAMP::timestamp_ntz);
-- delete older duplicates (DESC)
DELETE FROM fruit
WHERE (ID
, UPDATED_AT) IN (
SELECT ID
, UPDATED_AT
FROM (
SELECT ID
, UPDATED_AT
, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY UPDATED_AT DESC) AS rn
FROM fruit
)
WHERE rn > 1
);
A simple UNION eliminates duplicates for the use case of all columns / no PKs.
Anyway, the problem should be solved as early as possible in the ingestion pipeline, and/or by using SCD etc.
Just looking for a raw, magic "best way to delete" is wrong in principle; use SCD with a high-resolution timestamp and it solves any such problem.
You want to fix a massive duplicate load? Then add a column like a batch id and remove all the records loaded in that batch.
It's like being healthy, you have 2 approaches:
eat a lot > get fat > go to a gym to burn it off
eat well > have a healthy lifestyle and no need for the gym.
So before discussing the best gym, try changing the lifestyle.
Hope this helps; learn to put pressure upstream on data producers instead of forever trying to clean up everyone else's mess.
The following solution is effective if you are looking at one or a few columns as primary key references for the table.
-- Create a temp table to hold our duplicates (only second occurrence)
CREATE OR REPLACE TRANSIENT TABLE temp_table AS (
SELECT [col1], [col2], .. [coln]
FROM (
SELECT *, ROW_NUMBER () OVER(
PARTITION BY [pk]1, [pk]2, .. [pk]m
ORDER BY [pk]1, [pk]2, .. [pk]m) AS duplicate_count
FROM [schema].[table]
) WHERE duplicate_count = 2
);
-- Delete all the duplicate records from the table
DELETE FROM [schema].[table] t1
USING temp_table t2
WHERE
t1.[pk]1 = t2.[pk]1 AND
t1.[pk]2 = t2.[pk]2 AND
..
t1.[pk]m = t2.[pk]m;
-- Insert single copy using the temp_table in the original table
INSERT INTO [schema].[table]
SELECT *
FROM temp_table;
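A concrete instance of the template against the question's data, treating (id, name) as the primary key reference and again assuming the table is called fruit:
-- hold one copy of each duplicated (id, name) pair
CREATE OR REPLACE TRANSIENT TABLE temp_table AS (
    SELECT id, name
    FROM (
        SELECT *, ROW_NUMBER() OVER (PARTITION BY id, name ORDER BY id) AS duplicate_count
        FROM fruit
    )
    WHERE duplicate_count = 2
);
-- remove every copy of those pairs, then put a single copy back
DELETE FROM fruit t1
USING temp_table t2
WHERE t1.id = t2.id AND t1.name = t2.name;
INSERT INTO fruit
SELECT * FROM temp_table;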
This is inspired by @Felipe Hoffa's answer:
-- create table with dupes and take the max id
create or replace transient table duplicate_holder as (
select max(S.ID) ID, some_field, count(some_field) numberAssets
from some_table S
group by some_field
having count(some_field)>1
)
-- join back to the original table on the field, excluding the ID in the duplicate table, and delete
delete from some_table as t
USING duplicate_holder as d
WHERE t.some_field=d.some_field
and t.id <> d.id
Not sure if people are still interested in this, but I've used the below query, which is more elegant and seems to have worked:
create or replace table {{your_table}} as
select * from {{your_table}}
qualify row_number() over (partition by {{criteria_columns}} order by 1) = 1

Splitting multiple fields by delimiter

I have to write an SP that can perform partial updates on our databases; the changes are stored in records of the PU table. A Values field contains all the values, delimited by a fixed delimiter. A Table field refers to a Schemes table, which contains the column names for each table in a similar fashion in a Columns field.
Now for my SP I need to split the Values field and the Columns field into a temp table of Column/Value pairs; this happens for each record in the PU table.
An example:
Our PU table looks something like this:
CREATE TABLE [dbo].[PU](
[Table] [nvarchar](50) NOT NULL,
[Values] [nvarchar](max) NOT NULL
)
Insert SQL for this example:
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Person','John Doe;26');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Person','Jane Doe;22');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Person','Mike Johnson;20');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Person','Mary Jane;24');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Course','Mathematics');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Course','English');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Course','Geography');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Campus','Campus A;Schools Road 1;Educationville');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Campus','Campus B;Schools Road 31;Educationville');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Campus','Campus C;Schools Road 22;Educationville');
And we have a Schemes table similar to this:
CREATE TABLE [dbo].[Schemes](
[Table] [nvarchar](50) NOT NULL,
[Columns] [nvarchar](max) NOT NULL
)
Insert SQL for this example:
INSERT INTO [dbo].[Schemes]([Table],[Columns]) VALUES ('Person','[Name];[Age]');
INSERT INTO [dbo].[Schemes]([Table],[Columns]) VALUES ('Course','[Name]');
INSERT INTO [dbo].[Schemes]([Table],[Columns]) VALUES ('Campus','[Name];[Address];[City]');
As a result, the first record of the PU table should result in a temp table pairing [Name] with 'John Doe' and [Age] with 26.
The 5th will have [Name] paired with 'Mathematics'.
Finally, the 8th PU record should result in [Name] = 'Campus A', [Address] = 'Schools Road 1' and [City] = 'Educationville'.
You get the idea.
I tried to use the following query to create the temp tables, but alas it fails when there's more than one value in the PU record:
DECLARE @Fields TABLE
(
[Column] INT,
[Value] VARCHAR(MAX)
)
INSERT INTO @Fields
SELECT TOP 1
(SELECT Value FROM STRING_SPLIT([dbo].[Schemes].[Columns], ';')),
(SELECT Value FROM STRING_SPLIT([dbo].[PU].[Values], ';'))
FROM [dbo].[PU] INNER JOIN [dbo].[Schemes] ON [dbo].[PU].[Table] = [dbo].[Schemes].[Table]
TOP 1 correctly gets the first PU record as each PU record is removed once processed.
The error is:
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
In the case of a Person record, the splits are indeed returning 2 values/columns at a time; I just want to store the values in 2 records instead of getting an error.
Any help on rewriting the above query?
Also do note that the data is just generic nonsense. Being able to have 2 fields that both contain delimited values, always equal in number (e.g. a 'person' in the PU table will always have 2 delimited values in the field), and break them up into several column/value rows is the point of the question.
UPDATE: Working implementation
Based on the (accepted) answer of Sean Lange, I was able to work out the following implementation to overcome the issue.
As I need to reuse it, the combine-column/value functionality is performed by a new function, declared as such:
CREATE FUNCTION [dbo].[JoinDelimitedColumnValue]
(@splitValues VARCHAR(8000), @splitColumns VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH MyValues AS
(
SELECT ColumnPosition = x.ItemNumber,
ColumnValue = x.Item
FROM dbo.DelimitedSplit8K(@splitValues, @pDelimiter) x
)
, ColumnData AS
(
SELECT ColumnPosition = x.ItemNumber,
ColumnName = x.Item
FROM dbo.DelimitedSplit8K(@splitColumns, @pDelimiter) x
)
SELECT cd.ColumnName,
v.ColumnValue
FROM MyValues v
JOIN ColumnData cd ON cd.ColumnPosition = v.ColumnPosition
;
In case of the above sample data, I'd call this function with the following SQL:
DECLARE @FieldValues VARCHAR(8000), @FieldColumns VARCHAR(8000)
SELECT TOP 1 @FieldValues=[dbo].[PU].[Values], @FieldColumns=[dbo].[Schemes].[Columns] FROM [dbo].[PU] INNER JOIN [dbo].[Schemes] ON [dbo].[PU].[Table] = [dbo].[Schemes].[Table]
INSERT INTO @Fields
SELECT [Column] = x.[ColumnName],[Value] = x.[ColumnValue] FROM [dbo].[JoinDelimitedColumnValue](@FieldValues, @FieldColumns, @Delimiter) x
This data structure makes this way more complicated than it should be. You can leverage the splitter from Jeff Moden here: http://www.sqlservercentral.com/articles/Tally+Table/72993/ The main difference between that splitter and all the others is that his returns the ordinal position of each element. Why all the other splitters don't do this is beyond me. For things like this it is needed. You have two sets of delimited data and you must ensure that they are both reassembled in the correct order.
The biggest issue I see is that you don't have anything in your main table to function as an anchor for ordering the results correctly. You need something, even an identity, to ensure the output rows stay "together". To accomplish this I just added an identity to the PU table.
alter table PU add RowOrder int identity not null
Now that we have an anchor, this is still a little cumbersome for what should be a simple query, but it is achievable.
Something like this will now work.
with MyValues as
(
select p.[Table]
, ColumnPosition = x.ItemNumber
, ColumnValue = x.Item
, RowOrder
from PU p
cross apply dbo.DelimitedSplit8K(p.[Values], ';') x
)
, ColumnData as
(
select ColumnName = replace(replace(x.Item, ']', ''), '[', '')
, ColumnPosition = x.ItemNumber
, s.[Table]
from Schemes s
cross apply dbo.DelimitedSplit8K(s.Columns, ';') x
)
select cd.[Table]
, v.ColumnValue
, cd.ColumnName
from MyValues v
join ColumnData cd on cd.[Table] = v.[Table]
and cd.ColumnPosition = v.ColumnPosition
order by v.RowOrder
, v.ColumnPosition
I recommend not storing values like this in the first place. I recommend having a key value in the tables, and preferably not using Table and Columns as a composite key. I recommend avoiding reserved words. I also don't know what version of SQL Server you are using; I am going to assume you are using a fairly recent version that will support my provided stored procedure.
Here is an overview of the solution:
1) You need to convert both the PU table and the Schema table into a table where each "column" value in the list of columns is isolated in its own row. If you can store the data in this format rather than the provided format, you will be a little better off.
What I mean is
Table|Columns
Person|Jane Doe;22
needs to be converted to
Table|Column|OrderInList
Person|Jane Doe|1
Person|22|2
There are multiple ways to do this, but I prefer an XML trick that I picked up. You can find multiple split-string examples online so I will not focus on that. Use whatever gives you the best performance. Unfortunately, you might not be able to get away from this table-valued function.
Update:
Thanks to Shnugo's performance enhancement comment, I have updated my XML splitter to give you the row number, which reduces some of my code. I do the exact same thing to the Schema list.
2) Since the new Schema table and the new PU table now contain the order in which each column appears, the PU table and the Schema table can be joined on the "Table" and the OrderInList:
CREATE FUNCTION [dbo].[fnSplitStrings_XML]
(
@List NVARCHAR(MAX),
@Delimiter VARCHAR(255)
)
RETURNS TABLE
AS
RETURN
(
SELECT y.i.value('(./text())[1]', 'nvarchar(4000)') AS Item,ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) as RowNumber
FROM
(
SELECT CONVERT(XML, '<i>'
+ REPLACE(@List, @Delimiter, '</i><i>')
+ '</i>').query('.') AS x
) AS a CROSS APPLY x.nodes('i') AS y(i)
);
GO
CREATE Procedure uspGetColumnValues
as
Begin
--Split each value in PU
select p.[Table],p.[Values],a.[Item],CHARINDEX(a.Item,p.[Values]) as LocationInStringForSorting,a.RowNumber
into #PuWithOrder
from PU p
cross apply [fnSplitStrings_XML](p.[Values],';') a --use whatever string split function is working best for you (performance wise)
--Split each value in Schema
select s.[Table],s.[Columns],a.[Item],CHARINDEX(a.Item,s.[Columns]) as LocationInStringForSorting,a.RowNumber
into #SchemaWithOrder
from Schemes s
cross apply [fnSplitStrings_XML](s.[Columns],';') a --use whatever string split function is working best for you (performance wise)
DECLARE @Fields TABLE --If this is an ETL process, maybe make this a permanent table with an auto incrementing Id and reference this table in all steps after this.
(
[Table] NVARCHAR(50),
[Columns] NVARCHAR(MAX),
[Column] VARCHAR(MAX),
[Value] VARCHAR(MAX),
OrderInList int
)
INSERT INTO @Fields([Table],[Columns],[Column],[Value],OrderInList)
Select pu.[Table],pu.[Values] as [Columns],s.Item as [Column],pu.Item as [Value],pu.RowNumber
from #PuWithOrder pu
join #SchemaWithOrder s on pu.[Table]=s.[Table] and pu.RowNumber=s.RowNumber
Select [Table],[Columns],[Column],[Value],OrderInList
from @Fields
order by [Table],[Columns],OrderInList
END
GO
EXEC uspGetColumnValues
GO
Update:
Since your working implementation is a table-valued function, I have another recommendation. The problem I see is that you're using a table-valued function, which ultimately handles one record at a time. You are going to get better performance with set-based operations and batching as needed. With a table-valued function, you are likely going to be looping through each row. If this is some sort of ETL process, your team will be better off if you have a stored procedure that processes the rows in bulk. It might make sense to stage the results into a better table that your team can work with downstream rather than have them use a potentially slow table-valued function.
