UNDOCUMENTED FEATURE when SELECTing a VARCHAR with trailing whitespace in SQL Server - sql-server

I hope this is an interesting puzzle for an SQL expert out there.
When I run the following query, I would expect it to return no results.
-- Create a table variable Note: This same behaviour occurs in standard tables.
DECLARE @TestResults TABLE (Id int IDENTITY(1,1) NOT NULL, Foo VARCHAR(100) NOT NULL, About VARCHAR(1000) NOT NULL)
-- Add some test data Note: Without space, space prefix and space suffix
INSERT INTO @TestResults(Foo, About) VALUES('Bar', 'No spaces')
INSERT INTO @TestResults(Foo, About) VALUES('Bar ', 'Space Suffix')
INSERT INTO @TestResults(Foo, About) VALUES(' Bar', 'Space prefix')
-- SELECT statement that is filtered by a value without a space and also a value with a space suffix
SELECT
t.Foo
, t.About
FROM @TestResults t
WHERE t.Foo like 'Bar '
AND t.Foo like 'Bar'
AND t.Foo = 'Bar '
AND t.Foo = 'Bar'
The results return a single row:
[Foo] [About]
Bar Space Suffix
I need to know more about this behaviour and how I should work around it.
It is also worth noting that LEN(Foo) is odd too, as follows:
DECLARE @TestResults TABLE (Id int IDENTITY(1,1) NOT NULL, Foo VARCHAR(100) NOT NULL, About VARCHAR(1000) NOT NULL)
INSERT INTO @TestResults(Foo, About) VALUES('Bar', 'No spaces')
INSERT INTO @TestResults(Foo, About) VALUES('Bar ', 'Space Suffix')
INSERT INTO @TestResults(Foo, About) VALUES(' Bar', 'Space prefix')
SELECT
t.Foo
, LEN(Foo) [Length]
, t.About
FROM @TestResults t
Gives the following results:
[Foo] [Length] [About]
Bar 3 No spaces
Bar 3 Space Suffix
Bar 4 Space prefix
Without any lateral thinking, what do I need to change my WHERE clause to in order to return 0 results as expected?

The answer is to add the following clause:
AND DATALENGTH(t.Foo) = DATALENGTH('Bar')
Running the following query...
DECLARE @Chars TABLE (CharNumber INT NOT NULL)
DECLARE @CharNumber INT = 0
WHILE(@CharNumber <= 255)
BEGIN
INSERT INTO @Chars(CharNumber) VALUES(@CharNumber)
SET @CharNumber = @CharNumber + 1
END
SELECT
CharNumber
, IIF('Test' = 'Test' + CHAR(CharNumber),1,0) ['Test' = 'Test' + CHAR(CharNumber)]
, IIF('Test' LIKE 'Test' + CHAR(CharNumber),1,0) ['Test' LIKE 'Test' + CHAR(CharNumber)]
, IIF(LEN('Test') = LEN('Test' + CHAR(CharNumber)),1,0) [LEN('Test') = LEN('Test' + CHAR(CharNumber))]
, IIF(DATALENGTH('Test') = DATALENGTH('Test' + CHAR(CharNumber)),1,0) [DATALENGTH('Test') = DATALENGTH('Test' + CHAR(CharNumber))]
FROM @Chars
WHERE ('Test' = 'Test' + CHAR(CharNumber))
OR ('Test' LIKE 'Test' + CHAR(CharNumber))
OR (LEN('Test') = LEN('Test' + CHAR(CharNumber)))
ORDER BY CharNumber
...produces the following results...
CharNumber  Equals  LIKE  LEN equal  DATALENGTH equal
----------  ------  ----  ---------  ----------------
0           1       1     0          0
32          1       0     1          0
37          0       1     0          0
(The columns abbreviate the four IIF expressions in the SELECT above; CHAR(0) is NUL, CHAR(32) is the space, and CHAR(37) is the % wildcard.)
DATALENGTH can be used to test the equality of two VARCHAR values, so the original query can be corrected as follows:
-- Create a table variable Note: This same behaviour occurs in standard tables.
DECLARE @TestResults TABLE (Id int IDENTITY(1,1) NOT NULL, Foo VARCHAR(100) NOT NULL, About VARCHAR(1000) NOT NULL)
-- Add some test data Note: Without space, space prefix and space suffix
INSERT INTO @TestResults(Foo, About) VALUES('Bar', 'No spaces')
INSERT INTO @TestResults(Foo, About) VALUES('Bar ', 'Space Suffix')
INSERT INTO @TestResults(Foo, About) VALUES(' Bar', 'Space prefix')
-- SELECT statement that is filtered by a value without a space and also a value with a space suffix
SELECT
t.Foo
, t.About
FROM @TestResults t
WHERE t.Foo like 'Bar '
AND t.Foo like 'Bar'
AND t.Foo = 'Bar '
AND t.Foo = 'Bar'
AND DATALENGTH(t.Foo) = DATALENGTH('Bar') -- Additional clause
I also made a function to be used instead of =
CREATE OR ALTER FUNCTION dbo.fVEQ( @VarCharA VARCHAR(MAX), @VarCharB VARCHAR(MAX) )
RETURNS BIT
WITH SCHEMABINDING
AS
BEGIN
-- Added by WonderWorker on 18th March 2020
DECLARE @Result BIT = IIF(
(@VarCharA = @VarCharB AND DATALENGTH(@VarCharA) = DATALENGTH(@VarCharB))
, 1, 0)
RETURN @Result
END
Here is a test for all 256 characters used as trailing (and leading) characters to prove that it works; the query below returns no rows if fVEQ behaves correctly:
-- Test fVEQ with all 256 characters
DECLARE @Chars TABLE (CharNumber INT NOT NULL)
DECLARE @CharNumber INT = 0
WHILE(@CharNumber <= 255)
BEGIN
INSERT INTO @Chars(CharNumber) VALUES(@CharNumber)
SET @CharNumber = @CharNumber + 1
END
SELECT
CharNumber
, dbo.fVEQ('Bar','Bar' + CHAR(CharNumber)) [fVEQ Trailing Char Test]
, dbo.fVEQ('Bar','Bar') [fVEQ Same test]
, dbo.fVEQ('Bar',CHAR(CharNumber) + 'Bar') [fVEQ Leading Char Test]
FROM @Chars
-- Any row returned below is a failure case; no rows means fVEQ behaved correctly for every character.
WHERE (dbo.fVEQ('Bar','Bar' + CHAR(CharNumber)) = 1)
OR (dbo.fVEQ('Bar','Bar') = 0)
OR (dbo.fVEQ('Bar',CHAR(CharNumber) + 'Bar') = 1)

The reason trailing whitespace is disregarded in string comparison is the notion of fixed-length string fields, in which any content shorter than the fixed length is automatically right-padded with spaces. Such fixed-length fields cannot distinguish meaningful trailing spaces from padding.
Fixed-length string fields exist in the first place because they improve performance significantly in many cases. When SQL was designed, character-based terminals (which usually treated trailing spaces as equivalent to padding), reports printed in monospaced fonts (which used trailing spaces for padding and alignment), and data storage and exchange formats (which used fixed-length fields in place of extensive and costly delimiters and complicated parsing logic) were all oriented around fixed-length fields, so the concept was tightly integrated at every stage of processing.
When comparing two fixed-length fields of the same fixed length, a literal comparison would of course be possible and would produce correct results.
But when comparing a fixed-length field of a given length to a fixed-length field of a different length, the desired behaviour would never be to include the trailing spaces in the comparison, since two such fields could never match literally, simply by virtue of their differing fixed lengths. The shorter field could be cast and padded to the length of the longer (at least conceptually, if not physically), but the trailing spaces would still be considered padding rather than meaningful.
When comparing a fixed-length field to a variable-length field, the desired behaviour is likewise almost never to include trailing spaces in the comparison. More complicated approaches that attempt to attribute meaning to trailing spaces on the variable-length side would only come at the cost of slower comparison logic, additional conceptual complexity, and more potential for error.
As for why variable-length to variable-length comparisons also ignore trailing spaces, even though spaces can be meaningful there in principle, the rationale is probably consistency with the behaviour when fixed-length fields are involved, and the avoidance of the most common kind of error, since trailing spaces are spurious in databases far more often than they are meaningful.
Nowadays, a database system designed from scratch in every respect would probably forsake fixed-length fields and perform all comparisons literally, leaving the developer to deal explicitly with spurious trailing spaces. But in my experience this would mean extra development effort and far more frequent error than the current arrangement in SQL, where errors in program logic involving the silent disregard of trailing spaces usually only occur when designing complex string-shredding logic to be used against un-normalised data (a kind of data that SQL is specifically not optimised for handling).
So to be clear, this is not an undocumented feature, but a prominent feature that exists by design. It is in fact documented behaviour: SQL Server follows the ANSI SQL-92 rule that string comparisons pad the shorter operand with spaces before comparing.
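As a quick illustration of the padding behaviour described above (a minimal sketch; the literal values and lengths are arbitrary):
-- '=' pads the shorter operand with spaces before comparing (ANSI SQL-92),
-- so the space-padded CHAR value and the unpadded VARCHAR value compare equal,
-- even though their storage sizes differ.
DECLARE @fixed CHAR(10) = 'Bar' -- stored as 'Bar' plus seven trailing spaces
DECLARE @variable VARCHAR(10) = 'Bar'
SELECT IIF(@fixed = @variable, 1, 0) AS EqualsComparison -- 1
, DATALENGTH(@fixed) AS FixedBytes -- 10
, DATALENGTH(@variable) AS VariableBytes -- 3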

If you change the query to
SELECT
Foo
, About
, CASE WHEN Foo LIKE 'Bar ' THEN 'T' ELSE 'F' END As Like_Bar_Space
, CASE WHEN Foo LIKE 'Bar' THEN 'T' ELSE 'F' END As Like_Bar
, CASE WHEN Foo = 'Bar ' THEN 'T' ELSE 'F' END As EQ_Bar_Space
, CASE WHEN Foo = 'Bar' THEN 'T' ELSE 'F' END As EQ_Bar
FROM @TestResults
it gives you a better overview, as you see the result of the different conditions separately:
Foo About Like_Bar_Space Like_Bar EQ_Bar_Space EQ_Bar
------ ------------ --------------- --------- ------------- ------
Bar No spaces F T T T
Bar Space Suffix T T T T
Bar Space prefix F F F F
It looks like equals (=) ignores trailing spaces in both the searched string and the pattern. LIKE, however, does not ignore a trailing space in the pattern, but does ignore extra trailing spaces in the searched string. Leading spaces are never ignored.
I don't know how the wrong entries got in there, but you can fix them with
UPDATE @TestResults SET Foo = TRIM(Foo)
(TRIM requires SQL Server 2017+; on older versions use RTRIM and/or LTRIM.)
You can make a trailing space sensitive test with:
WHERE t.Foo + ";" = pattern + ";"
You can make a trailing space insensitive test with:
WHERE RTRIM(t.Foo) = RTRIM(pattern)

Related

Is there a way to find values that contain only 0's and a symbol of any length?

I want to find strings of any length that contain only 0's and a symbol such as a / a . or a -
Examples include 0__0 and 000/00/00000 and .00000
Considering this sample data:
CREATE TABLE dbo.things(thing varchar(255));
INSERT dbo.things(thing) VALUES
('0__0'),('000/00/00000'),('00000'),('0123456');
Try the following, which locates the first position of any character that is NOT a 0, a period, a forward slash, or an underscore. PATINDEX returns 0 if the pattern is not found.
SELECT thing FROM dbo.things
WHERE PATINDEX('%[^0./_]%', thing) = 0;
Results:
thing
0__0
000/00/00000
00000
The opposite:
SELECT thing FROM dbo.things
WHERE PATINDEX('%[^0./_]%', thing) > 0;
Results:
thing
0123456
Example db<>fiddle
I can see a way of doing this, but it's something that wouldn't perform well if you think about using it as a search criterion.
We are going to use the TRANSLATE function in SQL Server to replace the allowed characters (or symbols, as you've said) with a zero, and then eliminate the zeroes. If the result is an empty string, there are two cases: either it only had zeroes and allowed characters, or it was already an empty string.
So, checking for this and for non-empty strings, we can determine whether it matches your criteria.
-- Test scenario
create table #example (something varchar(200) )
insert into #example(something) values
--Example cases from Stack Overflow
('0__0'),('000/00/00000'),('.00000'),
-- With something not allowed (don't know, just put a number)
('1230__0'),('000/04560/00000'),('.00000789'),
-- Just not allowed characters, zero, blank, and NULL
('1234567489'),('0'), (''),(null)
-- Shows the data, with a column to check if it matches your criteria
select *
from #example e
cross apply (
select case when
-- If it *must* have at least a zero
e.something like '%0%' and
-- Eliminates zeroes
replace(
-- Replaces the allowed characters with zero
translate(
e.something
,'_./'
,'000'
)
,'0'
,''
) = ''
then cast(1 as bit)
else cast(0 as bit)
end as doesItMatch
) as criteria(doesItMatch)
I really discourage you from using this as a search criteria.
-- Queries the table over this criteria.
-- This is going to compute over your entire table, so it can get very CPU intensive
select *
from #example e
where
-- If it *must* have at least a zero
e.something like '%0%' and
-- Eliminates zeroes
replace(
-- Replaces the allowed characters with zero
translate(
e.something
,'_./'
,'000'
)
,'0'
,''
) = ''
If you must use this as a search criterion, and this will be a common filter in your application, I suggest you create a new bit column to flag whether it matches, and index it. Thus, the increase in computational effort would be spread across the inserts/updates/deletes, and the search queries won't overload the database.
The code can be seen executing here, on DB Fiddle.
What I got from the question is that the strings must contain both 0 and any combination of the special characters in the string.
If you have SQL Server 2017 and above, you can use translate() to replace multiple characters with a space and compare this with the empty string. Also you can use LIKE to enforce that both a 0 and any combination of the special character(s) appear at least once:
DECLARE @temp TABLE (val varchar(100))
INSERT INTO @temp VALUES
('0__0'), ('000/00/00000'), ('.00000'), ('w0hee/'), ('./')
SELECT *
FROM @temp
WHERE val LIKE '%0%' --must have at least one zero somewhere
AND val LIKE '%[_/.]%' --must have at least one special character somewhere
AND TRANSLATE(val, '0./_', '    ') = '' --translate zeros and special characters to spaces; an all-spaces result compares equal to the empty string (TRANSLATE needs equal-length argument strings, so the third argument is four spaces)
Creates output:
val
0__0
000/00/00000
.00000

Why does LEN( char(32) ) = 0 in T-SQL?

I wanted to write a function to count the number of delimiters or any substring (which could be a space) in a string of text, throwing a hack error if the delimiter was null or empty:
if len(@lookfor)=0 or @lookfor is null return Cast('substring must not be null or empty' as int)
But if the function is called with @lookfor = ' ' that trips the error.
I am aware of DATALENGTH(). Just curious why a single space is treated as "trailing" if there's nothing before it.
I am aware of DATALENGTH(). Just curious why a single space is treated
as "trailing" if there's nothing before it.
It's trailing because it's at the end of the string. It's also leading, since it's at the beginning.
But if the function is called with @lookfor = ' ' that trips the error
Something that messes a lot of people up with SQL is how '' = ' '. Note this query:
DECLARE @blank VARCHAR(10) = '', @space VARCHAR(10) = CHAR(32);
SELECT CASE WHEN @blank = @space THEN 'That the...!?!?' END;
You can change @space to CHAR(32)+CHAR(32)+.... and @space and @blank will still be equal.
Complicating things a little more, note that the DATALENGTH for a blank/empty value is 0 when it's a VARCHAR(N), but is N for CHAR(N) values. In other words,
SELECT DATALENGTH(CAST('' AS CHAR(1))) returns 1, and SELECT DATALENGTH(CAST('' AS CHAR(10))) returns 10.
That means that if your delimiter variable is say, CHAR(1) - that will mess you up. Here's the function for you:
CREATE FUNCTION dbo.CountDelimiters(@string VARCHAR(8000), @delimiter VARCHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT DCount = MAX(DATALENGTH(@string)-LEN(REPLACE(@string,@delimiter,'')))
WHERE DATALENGTH(@delimiter) > 0;
Note that @delimiter is VARCHAR(1) and NOT a CHAR datatype.
The formula to count delimiters in @string is:
DATALENGTH(@string)-LEN(REPLACE(@string,@delimiter,''))
or
(DATALENGTH(@string)-LEN(REPLACE(@string,@delimiter,'')))/DATALENGTH(@delimiter) when dealing with delimiters longer than 1 character.
WHERE DATALENGTH(@delimiter) > 0 will force the function to ignore a NULL or blank value. This is known as a Startup Predicate.
Putting a MAX around DATALENGTH(@string)-LEN(REPLACE(@string,@delimiter,'')) forces the function to return a NULL value in the event you pass it a blank or NULL value.
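For example, the longer-delimiter form of the formula with a two-character delimiter (a hypothetical illustration):
DECLARE @string VARCHAR(8000) = 'one::two::three'
DECLARE @delimiter VARCHAR(10) = '::'
-- (15 bytes - 11 bytes) / 2 bytes per delimiter = 2 delimiters
SELECT (DATALENGTH(@string) - LEN(REPLACE(@string, @delimiter, ''))) / DATALENGTH(@delimiter) AS DCount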
This will return 10 for the number of spaces in my string:
SELECT f.DCount FROM dbo.CountDelimiters('one space two spaces three ', CHAR(32)) AS f;
Against a table you would use the function like this (note that I'm counting the number of times the letter "A" appears):
-- Sample Strings
DECLARE @table TABLE (SomeText VARCHAR(36));
INSERT @table VALUES('ABCABC'),('XXX'),('AAA'),(''),(NULL);
SELECT t.SomeText, f.DCount
FROM @table AS t
CROSS APPLY dbo.CountDelimiters(t.SomeText, 'A') AS f;
Which returns:
SomeText DCount
------------------------------------ -----------
ABCABC 2
XXX 0
AAA 3
0
NULL NULL
If a string has a character at the end, it is considered trailing, even if there are no other characters before it. The same logic applies to leading characters.
So ' ' can be considered an empty string ('') having a trailing space.
When I started using SQL, I also noticed the behavior that the LEN function ignores trailing spaces. And I think (but I am not sure) that it has to do with the fact that LEN should probably also behave "correctly" when used with CHAR/NCHAR values. Unlike VARCHAR/NVARCHAR, the CHAR/NCHAR values have a fixed width and will be filled with trailing spaces automatically. So when you put the value 'abc' in a field/variable of type CHAR(5), the value will become 'abc  ', but the LEN function will still "correctly" return 3 in that case.
I consider this just to be a strange quirk of SQL.
Remark:
The DATALENGTH function will not ignore trailing spaces in VARCHAR/NVARCHAR values. But note that DATALENGTH will return the size in bytes of the field's value. So if you use unicode data (NCHAR/NVARCHAR), the DATALENGTH function will return 6 for value N'abc', because each unicode character in SQL Server uses 2 bytes!
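A quick illustration of the difference (a minimal sketch):
-- LEN ignores trailing spaces; DATALENGTH counts stored bytes.
SELECT LEN('abc ') AS LenVarchar -- 3: the trailing space is ignored
, DATALENGTH('abc ') AS BytesVarchar -- 4: the trailing space is stored
, LEN(N'abc') AS LenUnicode -- 3
, DATALENGTH(N'abc') AS BytesUnicode -- 6: two bytes per character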

Dynamic SQL operator IN with multiple parameters [duplicate]

How do I parameterize a query containing an IN clause with a variable number of arguments, like this one?
SELECT * FROM Tags
WHERE Name IN ('ruby','rails','scruffy','rubyonrails')
ORDER BY Count DESC
In this query, the number of arguments could be anywhere from 1 to 5.
I would prefer not to use a dedicated stored procedure for this (or XML), but if there is some elegant way specific to SQL Server 2008, I am open to that.
You can parameterize each value, so something like:
string[] tags = new string[] { "ruby", "rails", "scruffy", "rubyonrails" };
string cmdText = "SELECT * FROM Tags WHERE Name IN ({0})";
string[] paramNames = tags.Select(
(s, i) => "@tag" + i.ToString()
).ToArray();
string inClause = string.Join(", ", paramNames);
using (SqlCommand cmd = new SqlCommand(string.Format(cmdText, inClause))) {
for(int i = 0; i < paramNames.Length; i++) {
cmd.Parameters.AddWithValue(paramNames[i], tags[i]);
}
}
Which will give you:
cmd.CommandText = "SELECT * FROM Tags WHERE Name IN (@tag0, @tag1, @tag2, @tag3)"
cmd.Parameters["@tag0"] = "ruby"
cmd.Parameters["@tag1"] = "rails"
cmd.Parameters["@tag2"] = "scruffy"
cmd.Parameters["@tag3"] = "rubyonrails"
No, this is not open to SQL injection. The only text injected into CommandText is not based on user input. It's solely based on the hardcoded "@tag" prefix and the index of an array. The index will always be an integer, is not user generated, and is safe.
The user inputted values are still stuffed into parameters, so there is no vulnerability there.
Edit:
Injection concerns aside, take care to note that constructing the command text to accommodate a variable number of parameters (as above) impedes SQL Server's ability to take advantage of cached queries. The net result is that you almost certainly lose the value of using parameters in the first place (as opposed to merely inserting the predicate strings into the SQL itself).
Not that cached query plans aren't valuable, but IMO this query isn't nearly complicated enough to see much benefit from it. While the compilation costs may approach (or even exceed) the execution costs, you're still talking milliseconds.
If you have enough RAM, I'd expect SQL Server would probably cache a plan for the common counts of parameters as well. I suppose you could always add five parameters, and let the unspecified tags be NULL - the query plan should be the same, but it seems pretty ugly to me and I'm not sure that it'd worth the micro-optimization (although, on Stack Overflow - it may very well be worth it).
Also, SQL Server 7 and later will auto-parameterize queries, so using parameters isn't really necessary from a performance standpoint - it is, however, critical from a security standpoint - especially with user inputted data like this.
Here's a quick-and-dirty technique I have used:
SELECT * FROM Tags
WHERE '|ruby|rails|scruffy|rubyonrails|'
LIKE '%|' + Name + '|%'
So here's the C# code:
string[] tags = new string[] { "ruby", "rails", "scruffy", "rubyonrails" };
const string cmdText = "select * from tags where '|' + @tags + '|' like '%|' + Name + '|%'";
using (SqlCommand cmd = new SqlCommand(cmdText)) {
cmd.Parameters.AddWithValue("@tags", string.Join("|", tags));
}
}
Two caveats:
The performance is terrible. LIKE "%...%" queries are not indexed.
Make sure you don't have any |, blank, or null tags or this won't work
There are other ways to accomplish this that some people may consider cleaner, so please keep reading.
For SQL Server 2008, you can use a table valued parameter. It's a bit of work, but it is arguably cleaner than my other method.
First, you have to create a type
CREATE TYPE dbo.TagNamesTableType AS TABLE ( Name nvarchar(50) )
Then, your ADO.NET code looks like this:
string[] tags = new string[] { "ruby", "rails", "scruffy", "rubyonrails" };
cmd.CommandText = "SELECT Tags.* FROM Tags JOIN @tagNames as P ON Tags.Name = P.Name";
// value must be IEnumerable<SqlDataRecord>
cmd.Parameters.AddWithValue("@tagNames", tags.AsSqlDataRecord("Name")).SqlDbType = SqlDbType.Structured;
cmd.Parameters["@tagNames"].TypeName = "dbo.TagNamesTableType";
// Extension method for converting IEnumerable<string> to IEnumerable<SqlDataRecord>
public static IEnumerable<SqlDataRecord> AsSqlDataRecord(this IEnumerable<string> values, string columnName) {
if (values == null || !values.Any()) return null; // Annoying, but SqlClient wants null instead of 0 rows
var firstRecord = values.First();
var metadata= new SqlMetaData(columnName, SqlDbType.NVarChar, 50); //50 as per SQL Type
return values.Select(v =>
{
var r = new SqlDataRecord(metadata);
r.SetValues(v);
return r;
});
}
Update
As per @Doug:
Please try to avoid var metadata = SqlMetaData.InferFromValue(firstRecord, columnName);
It sets the max length from the first value, so if the first value is 3 characters long it sets a max length of 3, and other records will be truncated if they are longer than 3 characters.
So, please try to use: var metadata = new SqlMetaData(columnName, SqlDbType.NVarChar, maxLen);
Note: use -1 for MAX length.
The original question was "How do I parameterize a query ..."
This is not an answer to that original question. There are some very good demonstrations of how to do that, in other answers.
See the first answer from Mark Brackett (the first answer starting "You can parameterize each value") and Mark Brackett's second answer for the preferred answer that I (and 231 others) upvoted. The approach given in his answer allows 1) for effective use of bind variables, and 2) for predicates that are sargable.
Selected answer
I am addressing here the approach given in Joel Spolsky's answer, the answer "selected" as the right answer.
Joel Spolsky's approach is clever. And it works reasonably; it's going to exhibit predictable behavior and predictable performance, given "normal" values and the normative edge cases, such as NULL and the empty string. And it may be sufficient for a particular application.
But in terms of generalizing this approach, let's also consider the more obscure corner cases, like when the Name column contains a wildcard character (as recognized by the LIKE predicate). The wildcard character I see most commonly used is % (a percent sign). So let's deal with that here now, and later go on to other cases.
Some problems with % character
Consider a Name value of 'pe%ter'. (For the examples here, I use a literal string value in place of the column name.) A row with a Name value of 'pe%ter' would be returned by a query of the form:
select ...
where '|peanut|butter|' like '%|' + 'pe%ter' + '|%'
But that same row will not be returned if the order of the search terms is reversed:
select ...
where '|butter|peanut|' like '%|' + 'pe%ter' + '|%'
The behavior we observe is kind of odd. Changing the order of the search terms in the list changes the result set.
It almost goes without saying that we might not want pe%ter to match peanut butter, no matter how much he likes it.
Obscure corner case
(Yes, I will agree that this is an obscure case. Probably one that is not likely to be tested. We wouldn't expect a wildcard in a column value. We may assume that the application prevents such a value from being stored. But in my experience, I've rarely seen a database constraint that specifically disallowed characters or patterns that would be considered wildcards on the right side of a LIKE comparison operator.)
Patching a hole
One approach to patching this hole is to escape the % wildcard character. (For anyone not familiar with the escape clause on the operator, here's a link to the SQL Server documentation.)
select ...
where '|peanut|butter|'
like '%|' + 'pe\%ter' + '|%' escape '\'
Now we can match the literal %. Of course, when we have a column name, we're going to need to dynamically escape the wildcard. We can use the REPLACE function to find occurrences of the % character and insert a backslash character in front of each one, like this:
select ...
where '|pe%ter|'
like '%|' + REPLACE( 'pe%ter' ,'%','\%') + '|%' escape '\'
So that solves the problem with the % wildcard. Almost.
Escape the escape
We recognize that our solution has introduced another problem: the escape character. We see that we're also going to need to escape any occurrences of the escape character itself. This time, we use ! as the escape character:
select ...
where '|pe%t!r|'
like '%|' + REPLACE(REPLACE( 'pe%t!r' ,'!','!!'),'%','!%') + '|%' escape '!'
The underscore too
Now that we're on a roll, we can add another REPLACE to handle the underscore wildcard. And just for fun, this time, we'll use $ as the escape character.
select ...
where '|p_%t!r|'
like '%|' + REPLACE(REPLACE(REPLACE( 'p_%t!r' ,'$','$$'),'%','$%'),'_','$_') + '|%' escape '$'
I prefer this approach to escaping because it works in Oracle and MySQL as well as SQL Server. (I usually use the \ backslash as the escape character, since that's the character we use in regular expressions. But why be constrained by convention!)
Those pesky brackets
SQL Server also allows for wildcard characters to be treated as literals by enclosing them in brackets []. So we're not done fixing yet, at least for SQL Server. Since pairs of brackets have special meaning, we'll need to escape those as well. If we manage to properly escape the brackets, then at least we won't have to bother with the hyphen - and the caret ^ within the brackets. And we can leave any % and _ characters inside the brackets escaped, since we'll have basically disabled the special meaning of the brackets.
Finding matching pairs of brackets shouldn't be that hard. It's a little more difficult than handling the occurrences of singleton % and _. (Note that it's not sufficient to just escape all occurrences of brackets, because a singleton bracket is considered to be a literal, and doesn't need to be escaped. The logic is getting a little fuzzier than I can handle without running more test cases.)
Inline expression gets messy
That inline expression in the SQL is getting longer and uglier. We can probably make it work, but heaven help the poor soul that comes behind and has to decipher it. As much of a fan as I am of inline expressions, I'm inclined not to use one here, mainly because I don't want to have to leave a comment explaining the reason for the mess, and apologizing for it.
A function, where?
Okay, so, if we don't handle that as an inline expression in the SQL, the closest alternative we have is a user defined function. And we know that won't speed things up any (unless we can define an index on it, like we could with Oracle.) If we've got to create a function, we might better do that in the code that calls the SQL statement.
And that function may have some differences in behavior, dependent on the DBMS and version. (A shout out to all you Java developers so keen on being able to use any database engine interchangeably.)
Domain knowledge
We may have specialized knowledge of the domain for the column (that is, the set of allowable values enforced for the column). We may know a priori that the values stored in the column will never contain a percent sign, an underscore, or bracket pairs. In that case, we just include a quick comment that those cases are covered.
The values stored in the column may allow for % or _ characters, but a constraint may require those values to be escaped, perhaps using a defined character, such that the values are LIKE comparison "safe". Again, a quick comment about the allowed set of values, and in particular which character is used as an escape character, and go with Joel Spolsky's approach.
But, absent the specialized knowledge and a guarantee, it's important for us to at least consider handling those obscure corner cases, and consider whether the behavior is reasonable and "per the specification".
Other issues recapitulated
I believe others have already sufficiently pointed out some of the other commonly considered areas of concern:
SQL injection (taking what would appear to be user-supplied information and including it in the SQL text rather than supplying it through bind variables). Using bind variables isn't required; it's just one convenient approach to thwart SQL injection. There are other ways to deal with it:
optimizer plan using index scan rather than index seeks, possible need for an expression or function for escaping wildcards (possible index on expression or function)
using literal values in place of bind variables impacts scalability
Conclusion
I like Joel Spolsky's approach. It's clever. And it works.
But as soon as I saw it, I immediately saw a potential problem with it, and it's not my nature to let it slide. I don't mean to be critical of the efforts of others. I know many developers take their work very personally, because they invest so much into it and they care so much about it. So please understand, this is not a personal attack. What I'm identifying here is the type of problem that crops up in production rather than testing.
You can pass the parameter as a string
So you have the string
DECLARE @tags VARCHAR(MAX)
SET @tags = 'ruby|rails|scruffy|rubyonrails'
select * from Tags
where Name in (SELECT item from fnSplit(@tags, '|'))
order by Count desc
Then all you have to do is pass the string as 1 parameter.
Here is the split function I use.
CREATE FUNCTION [dbo].[fnSplit](
@sInputList VARCHAR(8000) -- List of delimited items
, @sDelimiter VARCHAR(8000) = ',' -- delimiter that separates items
) RETURNS @List TABLE (item VARCHAR(8000))
AS
BEGIN
DECLARE @sItem VARCHAR(8000)
WHILE CHARINDEX(@sDelimiter,@sInputList,0) <> 0
BEGIN
SELECT
@sItem=RTRIM(LTRIM(SUBSTRING(@sInputList,1,CHARINDEX(@sDelimiter,@sInputList,0)-1))),
@sInputList=RTRIM(LTRIM(SUBSTRING(@sInputList,CHARINDEX(@sDelimiter,@sInputList,0)+LEN(@sDelimiter),LEN(@sInputList))))
IF LEN(@sItem) > 0
INSERT INTO @List SELECT @sItem
END
IF LEN(@sInputList) > 0
INSERT INTO @List SELECT @sInputList -- Put the last item in
RETURN
END
I heard Jeff/Joel talk about this on the podcast today (episode 34, 2008-12-16 (MP3, 31 MB), 1 h 03 min 38 secs - 1 h 06 min 45 secs), and I thought I recalled Stack Overflow was using LINQ to SQL, but maybe it was ditched. Here's the same thing in LINQ to SQL.
var inValues = new [] { "ruby","rails","scruffy","rubyonrails" };
var results = from tag in Tags
where inValues.Contains(tag.Name)
select tag;
That's it. And, yes, LINQ already looks backwards enough, but the Contains clause seems extra backwards to me. When I had to do a similar query for a project at work, I naturally tried to do this the wrong way by doing a join between the local array and the SQL Server table, figuring the LINQ to SQL translator would be smart enough to handle the translation somehow. It didn't, but it did provide an error message that was descriptive and pointed me towards using Contains.
Anyway, if you run this in the highly recommended LINQPad, and run this query, you can view the actual SQL that the SQL LINQ provider generated. It'll show you each of the values getting parameterized into an IN clause.
If you are calling from .NET, you could use Dapper dot net:
string[] names = new string[] {"ruby","rails","scruffy","rubyonrails"};
var tags = dataContext.Query<Tags>(@"
select * from Tags
where Name in @names
order by Count desc", new {names});
Here Dapper does the thinking, so you don't have to. Something similar is possible with LINQ to SQL, of course:
string[] names = new string[] {"ruby","rails","scruffy","rubyonrails"};
var tags = from tag in dataContext.Tags
where names.Contains(tag.Name)
orderby tag.Count descending
select tag;
In SQL Server 2016+ you could use STRING_SPLIT function:
DECLARE @names NVARCHAR(MAX) = 'ruby,rails,scruffy,rubyonrails';
SELECT *
FROM Tags
WHERE Name IN (SELECT [value] FROM STRING_SPLIT(@names, ','))
ORDER BY [Count] DESC;
or:
DECLARE @names NVARCHAR(MAX) = 'ruby,rails,scruffy,rubyonrails';
SELECT t.*
FROM Tags t
JOIN STRING_SPLIT(@names,',')
ON t.Name = [value]
ORDER BY [Count] DESC;
LiveDemo
The accepted answer will of course work, and it is one of the ways to go, but it is an anti-pattern.
E. Find rows by list of values
This is a replacement for the common anti-pattern of creating a dynamic SQL string in the application layer or in Transact-SQL, or of using the LIKE operator:
SELECT ProductId, Name, Tags
FROM Product
WHERE ',1,2,3,' LIKE '%,' + CAST(ProductId AS VARCHAR(20)) + ',%';
Addendum:
To improve the STRING_SPLIT table function row estimation, it is a good idea to materialize the split values as a temporary table/table variable:
DECLARE @names NVARCHAR(MAX) = 'ruby,rails,scruffy,rubyonrails,sql';
CREATE TABLE #t(val NVARCHAR(120));
INSERT INTO #t(val) SELECT s.[value] FROM STRING_SPLIT(@names, ',') s;
SELECT *
FROM Tags tg
JOIN #t t
ON t.val = tg.TagName
ORDER BY [Count] DESC;
SEDE - Live Demo
Related: How to Pass a List of Values Into a Stored Procedure
The original question had a requirement of SQL Server 2008. Because this question is often used as a duplicate target, I've added this answer as a reference.
This is possibly a half-nasty way of doing it; I used it once and it was rather effective.
Depending on your goals it might be of use.
Create a temp table with one column.
INSERT each look-up value into that column.
Instead of using an IN, you can then just use your standard JOIN rules. ( Flexibility++ )
This has a bit of added flexibility in what you can do, but it's more suited for situations where you have a large table to query, with good indexing, and you want to use the parametrized list more than once. Saves having to execute it twice and have all the sanitation done manually.
I never got around to profiling exactly how fast it was, but in my situation it was needed.
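A minimal sketch of that approach, reusing the Tags table from the question (the temp table name is arbitrary):
-- Stage the lookup values once, then JOIN instead of using IN.
CREATE TABLE #TagFilter (Name VARCHAR(50) PRIMARY KEY);
INSERT INTO #TagFilter (Name)
VALUES ('ruby'), ('rails'), ('scruffy'), ('rubyonrails');
SELECT t.*
FROM Tags AS t
JOIN #TagFilter AS f
ON f.Name = t.Name
ORDER BY t.[Count] DESC;
DROP TABLE #TagFilter;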
We have function that creates a table variable that you can join to:
ALTER FUNCTION [dbo].[Fn_sqllist_to_table](@list AS VARCHAR(8000),
@delim AS VARCHAR(10))
RETURNS @listTable TABLE(
Position INT,
Value VARCHAR(8000))
AS
BEGIN
DECLARE @myPos INT
SET @myPos = 1
WHILE Charindex(@delim, @list) > 0
BEGIN
INSERT INTO @listTable
(Position,Value)
VALUES (@myPos,LEFT(@list, Charindex(@delim, @list) - 1))
SET @myPos = @myPos + 1
IF Charindex(@delim, @list) = Len(@list)
INSERT INTO @listTable
(Position,Value)
VALUES (@myPos,'')
SET @list = RIGHT(@list, Len(@list) - Charindex(@delim, @list))
END
IF Len(@list) > 0
INSERT INTO @listTable
(Position,Value)
VALUES (@myPos,@list)
RETURN
END
So:
DECLARE @Name varchar(8000) = null -- parameter for search values
select * from Tags
where Name in (SELECT value From fn_sqllist_to_table(@Name,','))
order by Count desc
This is gross, but if you are guaranteed to have at least one, you could do:
SELECT ...
...
WHERE tag IN( @tag1, ISNULL( @tag2, @tag1 ), ISNULL( @tag3, @tag1 ), etc. )
Having IN( 'tag1', 'tag2', 'tag1', 'tag1', 'tag1' ) will be easily optimized away by SQL Server. Plus, you get direct index seeks.
I would pass a table type parameter (since it's SQL Server 2008), and do a where exists, or inner join. You may also use XML, using sp_xml_preparedocument, and then even index that temporary table.
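A minimal sketch of the table-type-plus-EXISTS variant (the type and procedure names here are placeholders, not an established convention):
-- Create the table type once.
CREATE TYPE dbo.TagList AS TABLE (Name NVARCHAR(50) PRIMARY KEY);
GO
-- The procedure receives the whole list as a single read-only parameter.
CREATE PROCEDURE dbo.GetTagsByList
@Names dbo.TagList READONLY
AS
SELECT t.*
FROM Tags AS t
WHERE EXISTS (SELECT 1 FROM @Names AS n WHERE n.Name = t.Name)
ORDER BY t.[Count] DESC;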
In my opinion, the best source to solve this problem is what has been posted on this site:
Syscomments. Dinakar Nethi
CREATE FUNCTION dbo.fnParseArray (@Array VARCHAR(1000),@separator CHAR(1))
RETURNS @T Table (col1 varchar(50))
AS
BEGIN
--DECLARE @T Table (col1 varchar(50))
-- @Array is the array we wish to parse
-- @Separator is the separator character such as a comma
DECLARE @separator_position INT -- This is used to locate each separator character
DECLARE @array_value VARCHAR(1000) -- this holds each array value as it is returned
-- For my loop to work I need an extra separator at the end. I always look to the
-- left of the separator character for each array value
SET @array = @array + @separator
-- Loop through the string searching for separator characters
WHILE PATINDEX('%' + @separator + '%', @array) <> 0
BEGIN
-- patindex matches a pattern against a string
SELECT @separator_position = PATINDEX('%' + @separator + '%',@array)
SELECT @array_value = LEFT(@array, @separator_position - 1)
-- This is where you process the values passed.
INSERT into @T VALUES (@array_value)
-- Replace this select statement with your processing
-- @array_value holds the value of this element of the array
-- This replaces what we just processed with an empty string
SELECT @array = STUFF(@array, 1, @separator_position, '')
END
RETURN
END
Use:
SELECT * FROM dbo.fnParseArray('a,b,c,d,e,f', ',')
CREDITS FOR: Dinakar Nethi
The proper way, IMHO, is to store the list in a character string (limited in length by what the DBMS supports); the only trick is that (in order to simplify processing) I have a separator (a comma in my example) at the beginning and at the end of the string. The idea is to "normalize on the fly", turning the list into a one-column table that contains one row per value. This allows you to turn
in (ct1,ct2, ct3 ... ctn)
into an
in (select ...)
or (the solution I'd probably prefer) a regular join, if you just add a "distinct" to avoid problems with duplicate values in the list.
Unfortunately, the techniques to slice a string are fairly product-specific.
Here is the SQL Server version:
with qry(n, names) as
(select len(list.names) - len(replace(list.names, ',', '')) - 1 as n,
substring(list.names, 2, len(list.names)) as names
from (select ',Doc,Grumpy,Happy,Sneezy,Bashful,Sleepy,Dopey,' names) as list
union all
select (n - 1) as n,
substring(names, 1 + charindex(',', names), len(names)) as names
from qry
where n > 1)
select n, substring(names, 1, charindex(',', names) - 1) dwarf
from qry;
The Oracle version:
select n, substr(name, 1, instr(name, ',') - 1) dwarf
from (select n,
substr(val, 1 + instr(val, ',', 1, n)) name
from (select rownum as n,
list.val
from (select ',Doc,Grumpy,Happy,Sneezy,Bashful,Sleepy,Dopey,' val
from dual) list
connect by level < length(list.val) -
length(replace(list.val, ',', ''))));
and the MySQL version:
select pivot.n,
substring_index(substring_index(list.val, ',', 1 + pivot.n), ',', -1)
from (select 1 as n
union all select 2 as n
union all select 3 as n
union all select 4 as n
union all select 5 as n
union all select 6 as n
union all select 7 as n
union all select 8 as n
union all select 9 as n
union all select 10 as n) pivot,
(select ',Doc,Grumpy,Happy,Sneezy,Bashful,Sleepy,Dopey,' val) as list
where pivot.n < length(list.val) - length(replace(list.val, ',', ''));
(Of course, "pivot" must return as many rows as the maximum number of
items we can find in the list)
If you've got SQL Server 2008 or later I'd use a Table Valued Parameter.
If you're unlucky enough to be stuck on SQL Server 2005 you could add a CLR function like this,
[SqlFunction(
DataAccess = DataAccessKind.None,
IsDeterministic = true,
SystemDataAccess = SystemDataAccessKind.None,
IsPrecise = true,
FillRowMethodName = "SplitFillRow",
TableDefinition = "s NVARCHAR(MAX)")]
public static IEnumerable Split(SqlChars separator, SqlString s)
{
if (s.IsNull)
return new string[0];
return s.ToString().Split(separator.Buffer);
}
public static void SplitFillRow(object row, out SqlString s)
{
s = new SqlString(row.ToString());
}
Which you could use like this,
declare @desiredTags nvarchar(MAX);
set @desiredTags = 'ruby,rails,scruffy,rubyonrails';
select * from Tags
where Name in (select s from [dbo].[Split] (',', @desiredTags))
order by Count desc
I think this is a case when a static query is just not the way to go. Dynamically build the list for your in clause, escape your single quotes, and dynamically build SQL. In this case you probably won't see much of a difference with any method due to the small list, but the most efficient method really is to send the SQL exactly as it is written in your post. I think it is a good habit to write it the most efficient way, rather than to do what makes the prettiest code, or consider it bad practice to dynamically build SQL.
I have seen the split functions take longer to execute than the query themselves in many cases where the parameters get large. A stored procedure with table valued parameters in SQL 2008 is the only other option I would consider, although this will probably be slower in your case. TVP will probably only be faster for large lists if you are searching on the primary key of the TVP, because SQL will build a temporary table for the list anyway (if the list is large). You won't know for sure unless you test it.
I have also seen stored procedures that had 500 parameters with default values of null, and having WHERE Column1 IN (@Param1, @Param2, @Param3, ..., @Param500). This caused SQL to build a temp table, do a sort/distinct, and then do a table scan instead of an index seek. That is essentially what you would be doing by parameterizing that query, although on a small enough scale that it won't make a noticeable difference. I highly recommend against having NULL in your IN lists, as if that gets changed to a NOT IN it will not act as intended. You could dynamically build the parameter list, but the only obvious thing that you would gain is that the objects would escape the single quotes for you. That approach is also slightly slower on the application end since the objects have to parse the query to find the parameters. It may or may not be faster on SQL, as parameterized queries call sp_prepare, sp_execute for as many times you execute the query, followed by sp_unprepare.
The reuse of execution plans for stored procedures or parameterized queries may give you a performance gain, but it will lock you in to one execution plan determined by the first query that is executed. That may be less than ideal for subsequent queries in many cases. In your case, reuse of execution plans will probably be a plus, but it might not make any difference at all as the example is a really simple query.
Cliffs notes:
For your case anything you do, be it parameterization with a fixed number of items in the list (null if not used), dynamically building the query with or without parameters, or using stored procedures with table valued parameters will not make much of a difference. However, my general recommendations are as follows:
Your case/simple queries with few parameters:
Dynamic SQL, maybe with parameters if testing shows better performance.
Queries with reusable execution plans, called multiple times by simply changing the parameters or if the query is complicated:
SQL with dynamic parameters.
Queries with large lists:
Stored procedure with table valued parameters. If the list can vary by a large amount use WITH RECOMPILE on the stored procedure, or simply use dynamic SQL without parameters to generate a new execution plan for each query.
Maybe we can use XML here:
declare @x xml
set @x='<items>
<item myvalue="29790" />
<item myvalue="31250" />
</items>
';
With CTE AS (
SELECT
x.item.value('@myvalue[1]', 'decimal') AS myvalue
FROM @x.nodes('//items/item') AS x(item) )
select * from YourTable where tableColumnName in (select myvalue from cte)
If we have the strings for the IN clause stored as a comma (,) delimited list, we can use the CHARINDEX function to get the values. If you use .NET, then you can map them with SqlParameters.
DDL Script:
CREATE TABLE Tags
([ID] int, [Name] varchar(20))
;
INSERT INTO Tags
([ID], [Name])
VALUES
(1, 'ruby'),
(2, 'rails'),
(3, 'scruffy'),
(4, 'rubyonrails')
;
T-SQL:
DECLARE @Param nvarchar(max)
SET @Param = 'ruby,rails,scruffy,rubyonrails'
SELECT * FROM Tags
WHERE CharIndex(Name,@Param)>0
You can use the above statement in your .NET code and map the parameter with SqlParameter.
Fiddler demo
EDIT:
Create the table called SelectedTags using the following script.
DDL Script:
Create table SelectedTags
(Name nvarchar(20));
INSERT INTO SelectedTags values ('ruby'),('rails')
T-SQL:
DECLARE @list nvarchar(max)
SELECT @list=coalesce(@list+',','')+st.Name FROM SelectedTags st
SELECT * FROM Tags
WHERE CharIndex(Name,@list)>0
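One caveat worth noting: a bare CHARINDEX test matches substrings, so a Name of 'ruby' would also be found inside a list that contains only 'rubyonrails'. Wrapping both sides in delimiters avoids that (a minimal sketch):
DECLARE @list nvarchar(max) = 'ruby,rails,scruffy,rubyonrails'
-- Delimit both the list and the value so only whole entries match.
SELECT *
FROM Tags
WHERE CHARINDEX(',' + Name + ',', ',' + @list + ',') > 0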
I'd approach this by default with passing a table valued function (that returns a table from a string) to the IN condition.
Here is the code for the UDF (I got it from Stack Overflow somewhere; I can't find the source right now):
CREATE FUNCTION [dbo].[Split] (@sep char(1), @s varchar(8000))
RETURNS table
AS
RETURN (
WITH Pieces(pn, start, stop) AS (
SELECT 1, 1, CHARINDEX(@sep, @s)
UNION ALL
SELECT pn + 1, stop + 1, CHARINDEX(@sep, @s, stop + 1)
FROM Pieces
WHERE stop > 0
)
SELECT
SUBSTRING(@s, start, CASE WHEN stop > 0 THEN stop-start ELSE 512 END) AS s
FROM Pieces
)
Once you got this your code would be as simple as this:
select * from Tags
where Name in (select s from dbo.split(';','ruby;rails;scruffy;rubyonrails'))
order by Count desc
Unless you have a ridiculously long string, this should work well with the table index.
If needed you can insert it into a temp table, index it, then run a join...
For a variable number of arguments like this the only way I'm aware of is to either generate the SQL explicitly or do something that involves populating a temporary table with the items you want and joining against the temp table.
Another possible solution is instead of passing a variable number of arguments to a stored procedure, pass a single string containing the names you're after, but make them unique by surrounding them with '<>'. Then use PATINDEX to find the names:
SELECT *
FROM Tags
WHERE PATINDEX('%<' + Name + '>%','<jo>,<john>,<scruffy>,<rubyonrails>') > 0
Use the following stored procedure. It uses a custom split function, which can be found here.
create procedure GetSearchMatchingTagNames
@PipeDelimitedTagNames varchar(max),
@delimiter char(1)
as
begin
select * from Tags
where Name in (select data from [dbo].[Split](@PipeDelimitedTagNames,@delimiter))
end
Here is another alternative. Just pass a comma-delimited list as a string parameter to the stored procedure and:
CREATE PROCEDURE [dbo].[sp_myproc]
@UnitList varchar(MAX) = '1,2,3'
AS
select column from table
where ph.UnitID in (select * from CsvToInt(@UnitList))
And the function:
CREATE Function [dbo].[CsvToInt] ( @Array varchar(MAX))
returns @IntTable table
(IntValue int)
AS
begin
declare @separator char(1)
set @separator = ','
declare @separator_position int
declare @array_value varchar(MAX)
set @array = @array + ','
while patindex('%,%' , @array) <> 0
begin
select @separator_position = patindex('%,%' , @array)
select @array_value = left(@array, @separator_position - 1)
Insert @IntTable
Values (Cast(@array_value as int))
select @array = stuff(@array, 1, @separator_position, '')
end
return
end
In ColdFusion we just do:
<cfset myvalues = "ruby|rails|scruffy|rubyonrails">
<cfquery name="q">
select * from sometable where values in <cfqueryparam value="#myvalues#" list="true">
</cfquery>
Here's a technique that recreates a local table to be used in a query string. Doing it this way eliminates all parsing problems.
The string can be built in any language. In this example I used SQL since that was the original problem I was trying to solve. I needed a clean way to pass in table data on the fly in a string to be executed later.
Using a user defined type is optional. The type only needs to be created once, which can be done ahead of time. Otherwise just add a full table type to the declaration in the string.
The general pattern is easy to extend and can be used for passing more complex tables.
-- Create a user defined type for the list.
CREATE TYPE [dbo].[StringList] AS TABLE(
[StringValue] [nvarchar](max) NOT NULL
)
-- Create a sample list using the list table type.
DECLARE @list [dbo].[StringList];
INSERT INTO @list VALUES ('one'), ('two'), ('three'), ('four')
-- Build a string in which we recreate the list so we can pass it to exec
-- This can be done in any language since we're just building a string.
DECLARE @str nvarchar(max);
SET @str = 'DECLARE @list [dbo].[StringList]; INSERT INTO @list VALUES '
-- Add all the values we want to the string. This would be a loop in C++.
SELECT @str = @str + '(''' + StringValue + '''),' FROM @list
-- Remove the trailing comma so the query is valid sql.
SET @str = substring(@str, 1, len(@str)-1)
-- Add a select to test the string.
SET @str = @str + '; SELECT * FROM @list;'
-- Execute the string and see we've passed the table correctly.
EXEC(@str)
In SQL Server 2016+ another possibility is to use the OPENJSON function.
This approach is blogged about in OPENJSON - one of best ways to select rows by list of ids.
A full worked example below
CREATE TABLE dbo.Tags
(
Name VARCHAR(50),
Count INT
)
INSERT INTO dbo.Tags
VALUES ('VB',982), ('ruby',1306), ('rails',1478), ('scruffy',1), ('C#',1784)
GO
CREATE PROC dbo.SomeProc
@Tags VARCHAR(MAX)
AS
SELECT T.*
FROM dbo.Tags T
WHERE T.Name IN (SELECT J.Value COLLATE Latin1_General_CI_AS
FROM OPENJSON(CONCAT('[', @Tags, ']')) J)
ORDER BY T.Count DESC
GO
EXEC dbo.SomeProc @Tags = '"ruby","rails","scruffy","rubyonrails"'
DROP TABLE dbo.Tags
I have an answer that doesn't require a UDF or XML.
Because IN accepts a select statement
e.g. SELECT * FROM Test where Data IN (SELECT Value FROM TABLE)
You really only need a way to convert the string into a table.
This can be done with a recursive CTE, or a query with a number table (or master..spt_values).
Here's the CTE version.
DECLARE @InputString varchar(8000) = 'ruby,rails,scruffy,rubyonrails'
SELECT @InputString = @InputString + ','
;WITH RecursiveCSV(x,y)
AS
(
SELECT
x = SUBSTRING(@InputString,0,CHARINDEX(',',@InputString,0)),
y = SUBSTRING(@InputString,CHARINDEX(',',@InputString,0)+1,LEN(@InputString))
UNION ALL
SELECT
x = SUBSTRING(y,0,CHARINDEX(',',y,0)),
y = SUBSTRING(y,CHARINDEX(',',y,0)+1,LEN(y))
FROM
RecursiveCSV
WHERE
SUBSTRING(y,CHARINDEX(',',y,0)+1,LEN(y)) <> '' OR
SUBSTRING(y,0,CHARINDEX(',',y,0)) <> ''
)
SELECT
*
FROM
Tags
WHERE
Name IN (select x FROM RecursiveCSV)
OPTION (MAXRECURSION 32767);
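For completeness, here is the number-table variant of the same idea, using master..spt_values as a ready-made number table (a minimal sketch; spt_values only supplies numbers 0-2047, so this version is limited to short lists, and a dedicated numbers table works the same way):
DECLARE @s VARCHAR(2000) = ',' + 'ruby,rails,scruffy,rubyonrails' + ','
-- For every position that holds a comma, extract the value that follows it.
SELECT SUBSTRING(@s, n.number + 1, CHARINDEX(',', @s, n.number + 1) - n.number - 1) AS x
FROM master..spt_values AS n
WHERE n.type = 'P'
AND n.number < LEN(@s) -- skip the final comma
AND SUBSTRING(@s, n.number, 1) = ','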
I use a more concise version of the top voted answer:
List<SqlParameter> parameters = tags.Select((s, i) => new SqlParameter("@tag" + i.ToString(), SqlDbType.NVarChar, 50) { Value = s }).ToList();
var whereCondition = string.Format("tags in ({0})", String.Join(",", parameters.Select(s => s.ParameterName)));
It does loop through the tag parameters twice; but that doesn't matter most of the time (it won't be your bottleneck; if it is, unroll the loop).
If you're really interested in performance and don't want to iterate through the loop twice, here's a less beautiful version:
var parameters = new List<SqlParameter>();
var paramNames = new List<string>();
for (var i = 0; i < tags.Length; i++)
{
var paramName = "#tag" + i;
//Include size and set value explicitly (not AddWithValue)
//Because SQL Server may use an implicit conversion if it doesn't know
//the actual size.
var p = new SqlParameter(paramName, SqlDbType.NVarChar(50) { Value = tags[i]; }
paramNames.Add(paramName);
parameters.Add(p);
}
var inClause = string.Join(",", paramNames);
Here is another answer to this problem.
(new version posted on 6/4/13).
private static DataSet GetDataSet(SqlConnectionStringBuilder scsb, string strSql, params object[] pars)
{
var ds = new DataSet();
using (var sqlConn = new SqlConnection(scsb.ConnectionString))
{
var sqlParameters = new List<SqlParameter>();
var replacementStrings = new Dictionary<string, string>();
if (pars != null)
{
for (int i = 0; i < pars.Length; i++)
{
if (pars[i] is IEnumerable<object>)
{
List<object> enumerable = (pars[i] as IEnumerable<object>).ToList();
replacementStrings.Add("@" + i, String.Join(",", enumerable.Select((value, pos) => String.Format("@_{0}_{1}", i, pos))));
sqlParameters.AddRange(enumerable.Select((value, pos) => new SqlParameter(String.Format("@_{0}_{1}", i, pos), value ?? DBNull.Value)).ToArray());
}
else
{
sqlParameters.Add(new SqlParameter(String.Format("@{0}", i), pars[i] ?? DBNull.Value));
}
}
}
strSql = replacementStrings.Aggregate(strSql, (current, replacementString) => current.Replace(replacementString.Key, replacementString.Value));
using (var sqlCommand = new SqlCommand(strSql, sqlConn))
{
if (pars != null)
{
sqlCommand.Parameters.AddRange(sqlParameters.ToArray());
}
else
{
//Fail-safe, just in case a user intends to pass a single null parameter
sqlCommand.Parameters.Add(new SqlParameter("@0", DBNull.Value));
}
using (var sqlDataAdapter = new SqlDataAdapter(sqlCommand))
{
sqlDataAdapter.Fill(ds);
}
}
}
return ds;
}
Cheers.
The only winning move is not to play.
No infinite variability for you. Only finite variability.
In the SQL you have a clause like this:
and ( {1} = 0 or b.CompanyId in ({2},{3},{4},{5},{6}) )
In the C# code you do something like this:
int origCount = idList.Count;
if (origCount > 5) {
throw new Exception("You may only specify up to five originators to filter on.");
}
while (idList.Count < 5) { idList.Add(-1); } // -1 is an impossible value
return ExecuteQuery<PublishDate>(getValuesInListSQL,
origCount,
idList[0], idList[1], idList[2], idList[3], idList[4]);
So basically, if the count is 0 then there is no filter and everything goes through. If the count is higher than 0, then the value must be in the list, but the list has been padded out to five with impossible values (so that the SQL still makes sense).
Sometimes the lame solution is the only one that actually works.

Select with Left Function Condition SQL Server [duplicate]

I've been using this for some time:
SUBSTRING(str_col, PATINDEX('%[^0]%', str_col), LEN(str_col))
However recently, I've found a problem with columns with all "0" characters like '00000000' because it never finds a non-"0" character to match.
An alternative technique I've seen is to use LTRIM with REPLACE:
REPLACE(LTRIM(REPLACE(str_col, '0', ' ')), ' ', '0')
This has a problem if there are embedded spaces, because they will be turned into "0"s when the spaces are turned back into "0"s.
I'm trying to avoid a scalar UDF. I've found a lot of performance problems with UDFs in SQL Server 2005.
SUBSTRING(str_col, PATINDEX('%[^0]%', str_col+'.'), LEN(str_col))
Why don't you just cast the value to INTEGER and then back to VARCHAR?
SELECT CAST(CAST('000000000' AS INTEGER) AS VARCHAR)
--------
0
Other answers here do not take into consideration if you have all-zeros (or even a single zero).
Some always default an empty string to zero, which is wrong when it is supposed to remain blank.
Re-read the original question. This answers what the Questioner wants.
Solution #1:
--This example uses both Leading and Trailing zero's.
--Avoid losing those Trailing zero's and converting embedded spaces into more zeros.
--I added a non-whitespace character ("_") to retain trailing zero's after calling Replace().
--Simply remove the RTrim() function call if you want to preserve trailing spaces.
--If you treat zero's and empty-strings as the same thing for your application,
-- then you may skip the Case-Statement entirely and just use CN.CleanNumber .
DECLARE @WackadooNumber VarChar(50) = ' 0 0123ABC D0 '--'000'--
SELECT WN.WackadooNumber, CN.CleanNumber,
(CASE WHEN WN.WackadooNumber LIKE '%0%' AND CN.CleanNumber = '' THEN '0' ELSE CN.CleanNumber END)[AllowZero]
FROM (SELECT @WackadooNumber[WackadooNumber]) AS WN
OUTER APPLY (SELECT RTRIM(RIGHT(WN.WackadooNumber, LEN(LTRIM(REPLACE(WN.WackadooNumber + '_', '0', ' '))) - 1))[CleanNumber]) AS CN
--Result: "123ABC D0"
Solution #2 (with sample data):
SELECT O.Type, O.Value, Parsed.Value[WrongValue],
(CASE WHEN CHARINDEX('0', T.Value) > 0--If there's at least one zero.
AND LEN(Parsed.Value) = 0--And the trimmed length is zero.
THEN '0' ELSE Parsed.Value END)[FinalValue],
(CASE WHEN CHARINDEX('0', T.Value) > 0--If there's at least one zero.
AND LEN(Parsed.TrimmedValue) = 0--And the trimmed length is zero.
THEN '0' ELSE LTRIM(RTRIM(Parsed.TrimmedValue)) END)[FinalTrimmedValue]
FROM
(
VALUES ('Null', NULL), ('EmptyString', ''),
('Zero', '0'), ('Zero', '0000'), ('Zero', '000.000'),
('Spaces', ' 0 A B C '), ('Number', '000123'),
('AlphaNum', '000ABC123'), ('NoZero', 'NoZerosHere')
) AS O(Type, Value)--O is for Original.
CROSS APPLY
( --This Step is Optional. Use if you also want to remove leading spaces.
SELECT LTRIM(RTRIM(O.Value))[Value]
) AS T--T is for Trimmed.
CROSS APPLY
( --From @CadeRoux's Post.
SELECT SUBSTRING(O.Value, PATINDEX('%[^0]%', O.Value + '.'), LEN(O.Value))[Value],
SUBSTRING(T.Value, PATINDEX('%[^0]%', T.Value + '.'), LEN(T.Value))[TrimmedValue]
) AS Parsed
Results: (output grid omitted)
Summary:
You could use what I have above for a one-off removal of leading zeros.
If you plan on reusing it a lot, then place it in an inline table-valued function (ITVF).
Your concerns about performance problems with UDFs are understandable.
However, this problem only applies to scalar functions and multi-statement table-valued functions.
Using ITVFs is perfectly fine.
I have the same problem with our 3rd-party database.
With alphanumeric fields, many are entered without the leading zeros, dang humans!
This makes joins impossible without cleaning up the missing leading zeros.
Conclusion:
Instead of removing the leading zeros, you may want to consider just padding your trimmed values with leading zeros when you do your joins.
Better yet, clean up your data in the table by adding leading zeros, then rebuild your indexes.
I think this would be WAY faster and less complex.
SELECT RIGHT('0000000000' + LTRIM(RTRIM(NULLIF(' 0A10 ', ''))), 10)--0000000A10
SELECT RIGHT('0000000000' + LTRIM(RTRIM(NULLIF('', ''))), 10)--NULL --When Blank.
Instead of a space, replace the 0's with a 'rare' whitespace character that shouldn't normally be in the column's text. A line feed is probably good enough for a column like this. Then you can LTRIM normally and replace the special character with 0's again.
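A sketch of that idea, using CHAR(10) as the marker and assuming line feeds never legitimately appear in the column. Note that one-argument LTRIM strips only spaces, so trimming the line feeds needs the two-argument LTRIM added in SQL Server 2022 (where LTRIM(str_col, '0') would do the whole job anyway):
--Mark the zeros with a character assumed absent from the data, trim, then restore.
--Requires SQL Server 2022+ for LTRIM(string, characters). Table and column names are hypothetical.
SELECT REPLACE(LTRIM(REPLACE(str_col, '0', CHAR(10)), CHAR(10)), CHAR(10), '0')
FROM SomeTable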
My version of this is an adaptation of Arvo's work, with a little more added to handle two other cases:
1) If we have all 0s, we should return the digit 0.
2) If we have a blank, we should still return a blank character.
CASE
WHEN PATINDEX('%[^0]%', str_col + '.') > LEN(str_col) THEN RIGHT(str_col, 1)
ELSE SUBSTRING(str_col, PATINDEX('%[^0]%', str_col + '.'), LEN(str_col))
END
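A quick check of both cases (the sample values are illustrative):
SELECT v AS str_col,
CASE
WHEN PATINDEX('%[^0]%', v + '.') > LEN(v) THEN RIGHT(v, 1)
ELSE SUBSTRING(v, PATINDEX('%[^0]%', v + '.'), LEN(v))
END AS trimmed
FROM (VALUES ('0000'), (''), ('000123')) AS t(v)
--trimmed: '0', '', '123'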
The following will return '0' if the string consists entirely of zeros:
CASE
WHEN SUBSTRING(str_col, PATINDEX('%[^0]%', str_col + '.'), LEN(str_col)) = '' THEN '0'
ELSE SUBSTRING(str_col, PATINDEX('%[^0]%', str_col + '.'), LEN(str_col))
END AS str_col
This makes a nice Function....
IF OBJECT_ID('[dbo].[FN_StripLeading]', 'FN') IS NOT NULL
DROP FUNCTION [dbo].[FN_StripLeading]
GO
CREATE FUNCTION [dbo].[FN_StripLeading] (@string VarChar(128), @stripChar VarChar(1))
RETURNS VarChar(128)
AS
BEGIN
-- http://stackoverflow.com/questions/662383/better-techniques-for-trimming-leading-zeros-in-sql-server
DECLARE @retVal VarChar(128),
@pattern VarChar(10)
SELECT @pattern = '%[^' + @stripChar + ']%'
SELECT @retVal = CASE WHEN SUBSTRING(@string, PATINDEX(@pattern, @string + '.'), LEN(@string)) = '' THEN @stripChar ELSE SUBSTRING(@string, PATINDEX(@pattern, @string + '.'), LEN(@string)) END
RETURN (@retVal)
END
GO
GRANT EXECUTE ON [dbo].[FN_StripLeading] TO PUBLIC
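A usage sketch (the input values are illustrative):
SELECT dbo.FN_StripLeading('000123', '0') --'123'
SELECT dbo.FN_StripLeading('0000', '0') --'0'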
cast(value AS int) will always work if the string is a number:
SELECT CAST(CAST('000000000' AS INTEGER) AS VARCHAR)
This has a limit on the length of the string that can be converted to an INT
If you are using Snowflake SQL, you might use this:
ltrim(str_col,'0')
The ltrim function removes all instances of the designated set of characters from the left side.
So ltrim(str_col,'0') on '00000008A' would return '8A'
And rtrim(str_col,'0.') on '$125.00' would return '$125'
This might help if the values are purely numeric (the implicit conversion drops the leading zeros, but returns a number rather than a string):
SELECT ABS(column_name) FROM [db].[schema].[table]
REPLACE(LTRIM(REPLACE(TableName.FieldName, '0', ' ')), ' ', '0')
The suggestion from Thomas G worked for our needs.
The field in our case was already a string and only the leading zeros needed to be trimmed. Mostly it is all numeric, but sometimes there are letters, so the earlier INT conversion would crash.
For converting a number stored as varchar to int, you can also simply use
(column + 0)
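For instance (a sketch; the literal stands in for the column, the result is an INT, and it fails on non-numeric strings or values beyond the INT range):
SELECT '000123' + 0 --123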
A very easy way, when you work only with numeric values:
SELECT
TRY_CONVERT(INT, '000053830')
Try this:
replace(ltrim(replace(@str, '0', ' ')), ' ', '0')
If you do not want to convert to int, I prefer the logic below because it can handle NULLs (note that IFNULL and the two-argument LTRIM are Snowflake-style syntax; T-SQL only gained a characters argument for LTRIM in SQL Server 2022):
IFNULL(LTRIM(field, '0'), field)
SUBSTRING(str_col, IIF(LEN(str_col) > 0, PATINDEX('%[^0]%', LEFT(str_col, LEN(str_col) - 1) + '.'), 0), LEN(str_col))
Works fine even with '0' and '00', because excluding the last character from the pattern search guarantees that at least one character always survives.
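A quick check (the sample values are illustrative):
SELECT v AS str_col,
SUBSTRING(v, IIF(LEN(v) > 0, PATINDEX('%[^0]%', LEFT(v, LEN(v) - 1) + '.'), 0), LEN(v)) AS trimmed
FROM (VALUES ('0'), ('00'), ('000123')) AS t(v)
--trimmed: '0', '0', '123'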
Starting with SQL Server 2022 (16.x) you can do this
TRIM ( [ LEADING | TRAILING | BOTH ] [characters FROM ] string )
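For example, to strip leading zeros (the literal is illustrative):
SELECT TRIM(LEADING '0' FROM '000123') --'123'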
In MySQL you can do this...
Trim(Leading '0' from your_column)

Right pad a string with variable number of spaces

I have a customer table that I want to use to populate a parameter box in SSRS 2008. The cust_num is the value and the concatenation of the cust_name and cust_addr will be the label. The required fields from the table are:
cust_num int PK
cust_name char(50) not null
cust_addr char(50)
The SQL is:
select cust_num, cust_name + isnull(cust_addr, '') address
from customers
Which gives me this in the parameter list:
FIRST OUTPUT - ACTUAL
1 cust1 addr1
2 customer2 addr2
Which is what I expected but I want:
SECOND OUTPUT - DESIRED
1 cust1     addr1
2 customer2 addr2
What I have tried:
select cust_num, rtrim(cust_name) + space(60 - len(cust_name)) +
rtrim(cust_addr) + space(60 - len(cust_addr)) customer
from customers
Which gives me the first output.
select cust_num, rtrim(cust_name) + replicate(char(32), 60 - len(cust_name)) +
rtrim(cust_addr) + replicate(char(32), 60 - len(cust_addr)) customer
Which also gives me the first output.
I have also tried replacing space() with char(32) and vice versa
I have tried variations of SUBSTRING, LEFT, and RIGHT, all to no avail.
I have also used LTRIM and RTRIM in various spots.
The reason for the 60 is that I have checked the max length in both fields and it is 50, and I want some whitespace between the fields even if a field is maxed out. I am not really concerned about truncated data, since the city, state, and zip are in separate fields; if the end of the street address is chopped off, it is OK, I guess.
This is not a show stopper, the SSRS report is currently deployed with the first output but I would like to make it cleaner if I can.
Whammo blammo (for leading spaces):
SELECT
RIGHT(space(60) + cust_name, 60),
RIGHT(space(60) + cust_addr, 60)
OR (for trailing spaces)
SELECT
LEFT(cust_name + space(60), 60),
LEFT(cust_addr + space(60), 60)
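Applied to the question's query, that might look like this (a sketch using the question's column names):
select cust_num, LEFT(rtrim(cust_name) + space(60), 60) + rtrim(cust_addr) address
from customers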
The easiest way to right-pad a string with spaces (without them being trimmed) is to simply cast the string as CHAR(length). Trailing spaces on VARCHAR values are easy to lose, because comparisons ignore them and many tools trim them from the VARiable-length data type. Since CHAR is a fixed-length data type, SQL Server will never trim the trailing spaces, and will automatically pad strings that are shorter than its length with spaces. Try the following code snippet for example.
SELECT CAST('Test' AS CHAR(20))
This returns the value 'Test                ' ('Test' padded with 16 trailing spaces to a length of 20).
This is based on Jim's answer:
SELECT
@field_text + SPACE(@pad_length - LEN(@field_text)) AS RightPad
,SPACE(@pad_length - LEN(@field_text)) + @field_text AS LeftPad
Advantages
More straightforward
Slightly cleaner (IMO)
Faster (maybe?)
Easily modified to double-pad for display in non-fixed-width fonts, or to split the padding left and right to center
Disadvantages
Doesn't handle LEN(@field_text) > @pad_length (see the sketch below)
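One way to cover that disadvantage is to guard the SPACE() argument, since SPACE() returns NULL for a negative length (a sketch; the sample values are illustrative):
DECLARE @field_text VarChar(100) = 'This text is longer than the pad', @pad_length INT = 10
SELECT @field_text + SPACE(CASE WHEN @pad_length > LEN(@field_text)
THEN @pad_length - LEN(@field_text) ELSE 0 END) AS RightPad
--Returns the text unpadded instead of NULL.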
Based on KMier's answer, this addresses the comment that the method poses a problem when the value to be padded is not a simple field but the outcome of a (possibly complicated) function, which would otherwise have to be repeated for every reference.
Also, this allows for padding a field to the maximum length of its contents.
WITH
cte AS (
SELECT 'foo' AS value_to_be_padded
UNION SELECT 'foobar'
),
cte_max AS (
SELECT MAX(LEN(value_to_be_padded)) AS max_len FROM cte
)
SELECT
CONCAT(SPACE(max_len - LEN(value_to_be_padded)), value_to_be_padded) AS left_padded,
CONCAT(value_to_be_padded, SPACE(max_len - LEN(value_to_be_padded))) AS right_padded
FROM cte
CROSS JOIN cte_max;
declare @t table(f1 varchar(50),f2 varchar(50),f3 varchar(50))
insert into @t values
('foooo','fooooooo','foo')
,('foo','fooooooo','fooo')
,('foooooooo','fooooooo','foooooo')
select
concat(f1
,space(max(len(f1)) over () - len(f1))
,space(3)
,f2
,space(max(len(f2)) over () - len(f2))
,space(3)
,f3
)
from @t
result
foooo       fooooooo   foo
foo         fooooooo   fooo
foooooooo   fooooooo   foooooo

Resources