I have a query where I input username as a single string:
'MONTY,JONTY'
My query part looks like:
SELECT *
FROM tbl_dummy
WHERE username IN (SELECT regexp_substr(:username, '[^,]+', 1, level)
FROM dual
CONNECT BY regexp_substr(:username, '[^,]+', 1, level) IS NOT NULL);
Here my column 'Username' will have data like:
Monty, Jonty
Monty
Jonty
Jonty, Monty
So when I pass my string, it will be split, i.e. 'Monty', 'Jonty', and the comparison will miss two of the four rows: "Monty, Jonty" and "Jonty, Monty".
If I were able to split my column value during the comparison as well, then I could prove LHS = RHS.
So it would be ('JONTY','MONTY') = ('MONTY','JONTY').
Is there any way this functionality can be achieved? I cannot write a stored procedure, so it has to be a plain Oracle query.
Also, has anyone used REGEXP_LIKE for such a thing? I am not able to find a syntax that would fit this code.
If I understand your question correctly, and if your comma-delimited values aren't too long (regexes in Oracle are limited to 512 bytes IIRC), you can replace the comma-delimited list with a pipe (|)-delimited list and use REGEXP_LIKE() as follows:
WITH u1 AS (
SELECT 'MONTY, JONTY' AS username FROM dual
UNION
SELECT 'MONTY' AS username FROM dual
UNION
SELECT 'JONTY' AS username FROM dual
UNION
SELECT 'JONTY, MONTY' AS username FROM dual
), u2 AS (
SELECT TRIM(REGEXP_SUBSTR('JONTY, MONTY','[^,]+', 1, LEVEL)) AS username FROM dual
CONNECT BY REGEXP_SUBSTR('JONTY, MONTY', '[^,]+', 1, LEVEL) IS NOT NULL
)
SELECT * FROM u1
WHERE EXISTS (
SELECT 1 FROM u2
WHERE REGEXP_LIKE(u2.username, '^(' || REGEXP_REPLACE(u1.username, '\s*,\s*', '|') || ')$', 'i')
)
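The query above returns a row as soon as any single token matches. If you instead need the strict set comparison described in the question, where ('JONTY','MONTY') should equal ('MONTY','JONTY') and single-name rows should not match, one possible direction is to normalize both sides into a sorted, upper-cased token list and compare the results. This is a rough, untested sketch (it assumes Oracle 11gR2+ for LISTAGG and reuses the :username bind variable and tbl_dummy from the question):
SELECT t.*
FROM tbl_dummy t
WHERE (SELECT LISTAGG(TRIM(UPPER(REGEXP_SUBSTR(t.username, '[^,]+', 1, LEVEL))), ',')
              WITHIN GROUP (ORDER BY TRIM(UPPER(REGEXP_SUBSTR(t.username, '[^,]+', 1, LEVEL))))
       FROM dual
       CONNECT BY LEVEL <= REGEXP_COUNT(t.username, ',') + 1)
    = (SELECT LISTAGG(TRIM(UPPER(REGEXP_SUBSTR(:username, '[^,]+', 1, LEVEL))), ',')
              WITHIN GROUP (ORDER BY TRIM(UPPER(REGEXP_SUBSTR(:username, '[^,]+', 1, LEVEL))))
       FROM dual
       CONNECT BY LEVEL <= REGEXP_COUNT(:username, ',') + 1);
Both 'Monty, Jonty' and 'MONTY,JONTY' normalize to 'JONTY,MONTY', so the order of names inside the string no longer matters.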
Simply use the LIKE operator:
where column_name like '%MONTY%' or column_name like '%JONTY%'
You are trying to use a varying IN-list in the predicate. In your case, 'MONTY, JONTY' is a single string.
I have this code that works in Oracle, to a certain degree:
with data(str) as (
select '3|BUTE PLACE|BUTE PLACE' from dual union all
select '3 BUTE PLACE BUTE PLACE' from dual union all
select '3 BUTE PLACE BUTE-PLACE' from dual)
select str, rtrim(str_new, ' ') new_str
from data
model
  -- evaluate each input row independently, iterating on a single cell
  partition by (rownum rn)
  dimension by (0 dim)
  measures(str, str||' ' str_new)
  rules
  -- keep stripping a later repeat of an earlier word or phrase until the string stops changing
  iterate(10000) until (str_new[0] = previous(str_new[0]))
  (str_new[0]=regexp_replace(str_new[0],'(^| )([^ ]+ )(.*? )?\2+','\1\2\3'))
What I'm trying to figure out is how to use this code in Snowflake, where my address line is |-separated, and I want to be able to turn '3|BUTE PLACE|BUTE PLACE' into '3|BUTE PLACE' for the purpose of address matching.
Thanks
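One possible Snowflake direction (a rough, untested sketch, and not a translation of the MODEL clause itself): since the address line is already |-delimited, split it into segments with SPLIT_TO_TABLE, keep the first occurrence of each distinct segment, and stitch them back together in their original order with LISTAGG. Note that this drops any repeated segment, not only adjacent ones.
with data(str) as (
    select '3|BUTE PLACE|BUTE PLACE'
), parts as (
    -- one row per |-separated segment, remembering where each segment first appears
    select d.str, s.value as part, min(s.index) as first_pos
    from data d,
         lateral split_to_table(d.str, '|') s
    group by d.str, s.value
)
select str,
       listagg(part, '|') within group (order by first_pos) as new_str
from parts
group by str;
This turns '3|BUTE PLACE|BUTE PLACE' into '3|BUTE PLACE' while leaving already-unique segments untouched.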
I'm creating an SSRS report, and while writing the query I look up the code logic for the data that needs to be retrieved. There is a lot of usage of the !String.IsNullOrEmpty method, so I want to know: what is the shortest and best way to do the equivalent check in SQL Server?
WHERE t.Name IS NOT NULL OR T.Name != ''
or....
WHERE LEN(t.Name) > 0
Which one is correct? Or is there any other alternative?
There is no built-in equivalent of IsNullOrEmpty() in T-SQL, largely due to the fact that trailing blanks are ignored when comparing strings. One of your options:
where len(t.Name) > 0
would be enough as len() ignores trailing spaces too. Unfortunately it can make the expression non-SARGable, and any index on the Name column might not be used. However, this one should do the trick:
where t.Name > ''
P.S. For the sake of completeness, the datalength() function takes all characters into account; keep in mind however that it returns the number of bytes, not characters, so for any nvarchar value the result will be at least double of what you might expect (and with supplementary characters / surrogate pairs the number should be even higher, if my memory serves).
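To illustrate the difference between the two functions, here's a quick self-contained snippet (the variable is just for demonstration):
DECLARE @s nvarchar(10) = N'abc ';
SELECT LEN(@s)        AS len_chars,        -- 3: the trailing space is ignored
       DATALENGTH(@s) AS datalength_bytes; -- 8: 4 characters x 2 bytes each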
If the desired result is the simplest possible one-liner then:
WHERE NULLIF(Name, '') IS NOT NULL
However, from a performance point of view, the following alternative is SARGable, so indexes can potentially be used to find and filter out rows with such values:
WHERE Name IS NOT NULL AND Name != ''
An example:
;WITH cte AS (
SELECT 1 AS ID, '' AS Name UNION ALL
SELECT 2, ' ' UNION ALL
SELECT 3, NULL UNION ALL
SELECT 4, 'abc'
)
SELECT * FROM cte
WHERE Name IS NOT NULL AND Name != ''
Results in:
ID Name
---------
4 abc
Yes, you can use WHERE LEN(t.Name)>0.
You can also verify as below:
-- Count the Total number of records
SELECT COUNT(1) FROM tblName as t
-- Count the Total number of 'NULL' or 'Blank' records
SELECT COUNT(1) FROM tblName as t WHERE ISNULL(t.Name,'')= ''
-- Count the Total number of 'NOT NULL' records
SELECT COUNT(1) FROM tblName as t WHERE LEN(t.Name)>0
Thanks.
Like in the example here, I want to do a distinct count across BigQuery arrays: Distinct Count across Bigquery arrays
However, I have a few extra requirements that make the solution provided in that post infeasible for me:
The solution must not use UDFs (too slow)
The solution must not use the HLL function (must be exact)
The solution must not use the SELECT-from-SELECT pattern shown in the linked solution, as it needs to aggregate on a flexible group of dimensions selected by an end user in a BI tool
So, while this extended example (containing user as a grouping dimension) works using HLL:
#standardSQL
WITH test AS (
  SELECT 'A' AS User, DATE('2018-01-01') AS ReportDate, 2 AS value, [1,2,3] AS key UNION ALL
  SELECT 'A' AS User, DATE('2018-01-02') AS ReportDate, 3 AS value, [1,4,5] AS key UNION ALL
  SELECT 'B' AS User, DATE('2018-01-02') AS ReportDate, 4 AS value, [4,5,6,7,8] AS key UNION ALL
  SELECT 'B' AS User, DATE('2018-01-02') AS ReportDate, 5 AS value, [3,4,5,6,7] AS key
)
SELECT
  User,
  SUM(value) total_value,
  HLL_COUNT.MERGE((
    SELECT HLL_COUNT.INIT(key)
    FROM UNNEST(key) key)) AS unique_key_count
FROM test
GROUP BY User
I need a version that accomplishes this distinct aggregated array counting with the requirements mentioned above.
Again, this means it should also work properly if I group only on ReportDate, on a combination of User / ReportDate, or in a scenario where this example is extended with additional dimensions.
#standardSQL
WITH test AS
(
SELECT 'A' AS User, DATE('2018-01-01') AS ReportDate, 2 AS value, [1,2,3] AS key UNION ALL
SELECT 'A' AS User, DATE('2018-01-02') AS ReportDate, 3 AS value, [1,4,5] AS key UNION ALL
SELECT 'B' AS User, DATE('2018-01-02') AS ReportDate, 4 AS value, [4,5,6,7,8] AS key UNION ALL
SELECT 'B' AS User, DATE('2018-01-02') AS ReportDate, 5 AS value, [3,4,5,6,7] AS key
)
SELECT
  User,
  -- each source row is repeated once per array element by the UNNEST join, so only
  -- add its value at offset 0 to avoid multiplying it by the array length
  SUM(IF(flag = 0, value, 0)) total_value,
  COUNT(DISTINCT key) unique_key_count
FROM test, UNNEST(key) key WITH OFFSET flag
GROUP BY User
with the result:
Row  User  total_value  unique_key_count
1    A     5            5
2    B     9            6
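Because the grouping columns appear only in the outer SELECT and GROUP BY, swapping dimensions is straightforward. For example, the same pattern grouped by ReportDate only (a sketch reusing the test data above, not part of the original answer):
#standardSQL
WITH test AS (
  SELECT 'A' AS User, DATE('2018-01-01') AS ReportDate, 2 AS value, [1,2,3] AS key UNION ALL
  SELECT 'A' AS User, DATE('2018-01-02') AS ReportDate, 3 AS value, [1,4,5] AS key UNION ALL
  SELECT 'B' AS User, DATE('2018-01-02') AS ReportDate, 4 AS value, [4,5,6,7,8] AS key UNION ALL
  SELECT 'B' AS User, DATE('2018-01-02') AS ReportDate, 5 AS value, [3,4,5,6,7] AS key
)
SELECT
  ReportDate,
  SUM(IF(flag = 0, value, 0)) total_value,
  COUNT(DISTINCT key) unique_key_count
FROM test, UNNEST(key) key WITH OFFSET flag
GROUP BY ReportDate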
Well, I had asked the same question for jQuery here; now my question is the same, but for a SQL Server query :) This time the values are not comma separated, they are stored as separate rows in the database.
I have separate rows containing decimal-like numbers:
Name
K1.1
K1.10
K1.2
K3.1
K3.14
K3.5
and I want to sort these numbers like this:
Name
K1.1
K1.2
K1.10
K3.1
K3.5
K3.14
Actually, in my case the digits after the decimal point should be treated as natural numbers, so 1.2 is treated as '2' and 1.10 as '10'; that's why 1.2 should come before 1.10.
You can remove the 'K', because it is common to all the values. A suggestion or example would be great. Thanks.
You can use PARSENAME (which is more of a hack) or string functions like CHARINDEX, STUFF, LEFT, etc. to achieve this.
Input data
;WITH CTE AS
(
SELECT 'K1.1' Name
UNION ALL SELECT 'K1.10'
UNION ALL SELECT 'K1.2'
UNION ALL SELECT 'K3.1'
UNION ALL SELECT 'K3.14'
UNION ALL SELECT 'K3.5'
)
Using PARSENAME
SELECT Name,PARSENAME(REPLACE(Name,'K',''),2),PARSENAME(REPLACE(Name,'K',''),1)
FROM CTE
ORDER BY CONVERT(INT,PARSENAME(REPLACE(Name,'K',''),2)),
CONVERT(INT,PARSENAME(REPLACE(Name,'K',''),1))
Using String Functions
SELECT Name,LEFT(Name,CHARINDEX('.',Name) - 1), STUFF(Name,1,CHARINDEX('.',Name),'')
FROM CTE
ORDER BY CONVERT(INT,REPLACE((LEFT(Name,CHARINDEX('.',Name) - 1)),'K','')),
CONVERT(INT,STUFF(Name,1,CHARINDEX('.',Name),''))
Output
K1.1 K1 1
K1.2 K1 2
K1.10 K1 10
K3.1 K3 1
K3.5 K3 5
K3.14 K3 14
This works if there is always exactly one character before the first number and the number before the dot is a single digit (not higher than 9):
SELECT name
FROM YourTable
ORDER BY CAST(SUBSTRING(name,2,1) AS INT), --Get the number before dot
CAST(RIGHT(name,LEN(name)-CHARINDEX('.',name)) AS INT) --Get the number after the dot
Perhaps more verbose, but it should do the trick:
declare @source as table(num varchar(12));
insert into @source(num) values('K1.1'),('K1.10'),('K1.2'),('K3.1'),('K3.14'),('K3.5');
-- create helper table
with data as
(
select num,
cast(SUBSTRING(replace(num, 'K', ''), 1, CHARINDEX('.', num) - 2) as int) as [first],
cast(SUBSTRING(replace(num, 'K', ''), CHARINDEX('.', num), LEN(num)) as int) as [second]
from @source
)
-- Select and order accordingly
select num
from data
order by [first], [second]
sqlfiddle:
http://sqlfiddle.com/#!6/a9b06/2
The shorter solution is this one:
Select Num
from yourtable
order by cast((Parsename(Num, 1) ) as Int)
I am creating reports in SSIS using datasets and have the following SQL requirement:
The SQL is returning three rows:
a
b
c
Is there any way I can have the SQL return an additional row without adding data to the table?
Thanks in advance,
Bruce
select MyCol from MyTable
union all
select 'something' as MyCol
You can use a UNION ALL to include a new row.
SELECT *
FROM yourTable
UNION ALL
SELECT 'newRow'
The number of columns needs to be the same between the top query and the bottom one. So if your first query has one column, then the second would also need one column.
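For example, if the existing query returned two columns, the appended row would need two values as well (the table and column names here are just placeholders):
SELECT ID, Name
FROM yourTable
UNION ALL
SELECT NULL, 'newRow'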
If you need to add multiple rows, there is a stranger syntax available:
declare @Footy as VarChar(16) = 'soccer'
select 'a' as Thing, 42 as Thingosity -- Your original SELECT goes here.
union all
select *
from ( values ( 'b', 2 ), ( 'c', 3 ), ( @Footy, Len( @Footy ) ) ) as Placeholder ( Thing, Thingosity )