I have the following values in the SQL Server table:
But I need to build a query whose output looks like this:
I know that I should probably use combination of substring and charindex but I have no idea how to do it.
Could you please help me with what the query should look like?
Thank you!
Try the following, it may work.
SELECT
offerId,
cTypes
FROM yourTable AS mt
CROSS APPLY
EXPLODE(mt.contractTypes) AS dp(cTypes);
You can use the string_split() function:
select t.offerid, trim(translate(tt.value, '[]"', '   ')) as contractTypes -- translate needs character lists of equal length
from yourTable t cross apply
     string_split(t.contractTypes, ',') tt(value);
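For reference, with the sample rows used in the OPENJSON answer further down, this query should return something like:
offerid  contractTypes
1        Hlavni pracovni pomer
2        ÖCVS
2        Staz
2        Prahovne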
The data in each row in the contractTypes column is a valid JSON array, so you may use OPENJSON() with explicit schema (result is a table with columns defined in the WITH clause) to parse this array and get the expected results:
Table:
CREATE TABLE Data (
offerId int,
contractTypes varchar(1000)
)
INSERT INTO Data
(offerId, contractTypes)
VALUES
(1, '[ "Hlavni pracovni pomer" ]'),
(2, '[ "ÖCVS", "Staz", "Prahovne" ]')
Statement:
SELECT d.offerId, j.contractTypes
FROM Data d
OUTER APPLY OPENJSON(d.contractTypes) WITH (contractTypes varchar(100) '$') j
Result:
offerId contractTypes
1 Hlavni pracovni pomer
2 ÖCVS
2 Staz
2 Prahovne
As an additional option, if you want to return the position of the contract type in the contractTypes array, you may use OPENJSON() with default schema (result is a table with columns key, value and type and the value in the key column is the 0-based index of the element in the array):
SELECT
d.offerId,
CONVERT(int, j.[key]) + 1 AS contractId,
j.[value] AS contractType
FROM Data d
OUTER APPLY OPENJSON(d.contractTypes) j
ORDER BY CONVERT(int, j.[key])
Result:
offerId contractId contractType
1 1 Hlavni pracovni pomer
2 1 ÖCVS
2 2 Staz
2 3 Prahovne
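One practical note, not from the original answer: OPENJSON (like STRING_SPLIT) only works when the database compatibility level is 130 or higher, which you can check with something like:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();
-- If it is lower and raising it is acceptable for the database:
-- ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 130;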
SQL Server 2017 on Azure.
Given a field called Categories in a table called dbo.sources:
ID Categories
1 ABC01, FFG02, ERERE, CC201
2 GDF01, ABC01, GREER, DS223
3 DSF12, GREER
4 ABC01
5 NULL
What is the syntax for a query that would remove ABC01 from any record where it exists, but keep the other codes in the string?
Results would be:
ID Categories
1 FFG02, ERERE, CC201
2 GDF01, GREER, DS223
3 DSF12, GREER
4 NULL
5 NULL
Normalising and then denormalising your data, you can do this:
USE Sandbox;
GO
CREATE TABLE dbo.Sources (ID int,
Categories varchar(MAX));
INSERT INTO dbo.Sources
VALUES (1,'ABC01,FFG02,ERERE,CC201'), -- I assume you don't really have the spaces
(2,'GDF01,ABC01,GREER,DS223'),
(3,'DSF12,GREER'),
(4,'ABC01'),
(5,NULL);
GO
DECLARE @Source varchar(5) = 'ABC01'; -- Value to remove
WITH CTE AS(
SELECT S.ID,
STRING_AGG(NULLIF(SS.[value],@Source),',') WITHIN GROUP(ORDER BY S.ID) AS Categories
FROM dbo.Sources S
CROSS APPLY STRING_SPLIT(S.Categories,',') SS
GROUP BY S.ID)
UPDATE S
SET Categories = C.Categories
FROM dbo.Sources S
JOIN CTE C ON S.ID = C.ID;
GO
SELECT ID,
Categories
FROM dbo.Sources
GO
DROP TABLE dbo.Sources;
Although this seems like a bit of overkill compared to the REPLACE approach, it shows why normalising the data in the first place is a far better idea, and how simple it actually is to do.
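For illustration only, a rough sketch of what that normalised design could look like (dbo.SourceCategories is a hypothetical child table, not something from the question):
CREATE TABLE dbo.SourceCategories (
    ID int NOT NULL,               -- references dbo.Sources.ID
    Category varchar(10) NOT NULL,
    CONSTRAINT PK_SourceCategories PRIMARY KEY (ID, Category)
);
-- Removing a category is then a plain DELETE instead of string surgery:
DELETE FROM dbo.SourceCategories
WHERE Category = 'ABC01';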
You can use REPLACE as follows:
update dbo.sources set
    Categories = nullif(replace(replace(replace(Categories, 'ABC01, ', ''), ', ABC01', ''), 'ABC01', ''), '')
where Categories like '%ABC01%';
-- The nested REPLACEs remove 'ABC01' together with its separator; NULLIF turns an emptied list (row 4) into NULL.
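If you want to sanity-check the expression before running the UPDATE, you can preview it in a SELECT first (same expression, assuming the ', ' separator format shown in the question):
SELECT ID,
       Categories AS old_categories,
       NULLIF(REPLACE(REPLACE(REPLACE(Categories, 'ABC01, ', ''), ', ABC01', ''), 'ABC01', ''), '') AS new_categories
FROM dbo.sources
WHERE Categories LIKE '%ABC01%';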
Context: SQL Server 2008
I have a table mytable which contains two NVARCHAR columns id, title.
All data in the id column are in fact numeric, except one row which contains the value 'test'.
I want to get all ids between 10 and 15 so I need SQL Server to convert id column values to INTEGER.
I use ISNUMERIC(id) = 1 to eliminate the non-numeric values first, but SQL Server is being rather weird with this query.
SELECT
    t.*
FROM
    (SELECT
         id, title
     FROM
         mytable
     WHERE
         ISNUMERIC(id) = 1) t
WHERE
    t.id BETWEEN 10 AND 15
This query causes the following error:
Conversion failed when converting the nvarchar value 'test' to
data type int.
The inner query eliminates the row with the 'test' id value, so the derived table 't' shouldn't contain it. Why is SQL Server still trying to convert it?
Am I missing something? How can I get around this?
P.S. I tried WHERE CAST(t.id AS INTEGER) BETWEEN 10 AND 15 but that didn't work either.
Use the TRY_CONVERT function; it's very handy.
SELECT id,
title
FROM mytable
where TRY_CONVERT(int, id) is not NULL
and TRY_CONVERT(int, id) BETWEEN 10 and 15
TRY_CONVERT returns null if the conversion fails.
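For example (made-up values mirroring the question):
SELECT TRY_CONVERT(int, 'test') AS non_numeric, -- NULL, no error raised
       TRY_CONVERT(int, '12')   AS is_numeric;  -- 12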
As for your error, I suppose the Query Optimizer messes something up here. Take a look at the execution plan; maybe it's filtering the values between 10 and 15 first. My solution will always work.
As the other commenter said, in your case the BETWEEN comparison is evaluated before ISNUMERIC. Here is a simple example:
select * into #temp2
from (
select 'test' a
union
select cast(1 as varchar) a
union
select cast(2 as varchar) a
union
select cast(3 as varchar) a
)z
SELECT a FROM (
SELECT a FROM #temp2 WHERE ISNUMERIC(a) = 1
) b
WHERE b.a BETWEEN 10 AND 15
This simple query is an alternative:
SELECT a
FROM #temp2
WHERE ISNUMERIC(a) = 1
and a BETWEEN 10 AND 15
I should add one more way, using XML:
SELECT *
FROM mytable
WHERE CAST(id as xml).value('. cast as xs:decimal?','int') BETWEEN 10 AND 15
Convert id to XML, cast the value as xs:decimal, and then convert to integer. If the value is not numeric, it will be converted to NULL.
Here you can read about XML type casting rules (link).
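A quick way to see the casting behaviour in isolation (made-up values):
SELECT CAST('test' AS xml).value('. cast as xs:decimal?', 'int') AS non_numeric, -- NULL
       CAST('12'   AS xml).value('. cast as xs:decimal?', 'int') AS is_numeric;  -- 12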
You could create a new field to do the search on:
Select id, title
from (Select id, title, case id when 'test' then null else cast(id as int) end as trueint from mytable) n
where n.trueint between 10 and 15
OR
Select id, title
from mytable
where case isnumeric(id) when 1 then cast(id as int) else null end between 10 and 15
One possibility is to force the engine to work this in two steps:
DECLARE @tbl TABLE(id NVARCHAR(MAX), title NVARCHAR(MAX));
INSERT INTO @tbl
SELECT id, title FROM mytable WHERE ISNUMERIC(id) = 1;
SELECT t.*
FROM @tbl t
WHERE CAST(t.id AS INT) BETWEEN 10 AND 15
I am having an issue with a query in only one particular environment/database, even though the data is almost identical. Here is the simplified scenario:
Table A - One column - Id (long)
Id
1
2
3
Table B - Two columns - Value (varchar) and Field2 (varchar)
Value   Field2
abc     NotKey
Test    NotKey
1       Key
1.56    NotKey
When I run the query
select * from tablea
where id in (select value from tableb where Field2 = 'Key')
I get the error
Result: Conversion failed when converting the varchar value 'abc' (earlier I had this value erroneously as 'NotKey') to data type int.
on one database. In three other databases, the value returns correctly as "1".
I am using SQL Server 2008. What might be the issue here?
You gave the wrong filter value for the query that leads to the error.
The error only happens when you select:
select * from tablea
where id in(select value from tableb where Field2 = 'NotKey')
You have to cast one of the columns:
select * from tablea
where cast( id as nvarchar(20)) in(select value from tableb where Field2 = 'NotKey')
http://sqlfiddle.com/#!6/e9223/23
You must have a record in this instance that doesn't follow the same pattern as before. Try running the following query to find your bad data. You could either fix the record or add a numeric check to the query you're using.
select *
from tableb
where Field2 = 'Key'
and (ISNUMERIC(Value) = 0
OR CHARINDEX('.', Value) > 0);
Filtered query:
select *
from tablea
where id in
(
select value
from tableb
where Field2 = 'Key'
and ISNUMERIC(value) = 1
and CHARINDEX('.', Value) = 0
);
I'm refactoring very old code. Currently, PHP generates a separate select for every value. Say loc contains 1,2 and data contains a,b, it generates
select val from tablename where loc_id=1 and data_id=a;
select val from tablename where loc_id=1 and data_id=b;
select val from tablename where loc_id=2 and data_id=a;
select val from tablename where loc_id=2 and data_id=b;
...etc., which all return either a single value or nothing. That meant I always had n(loc_id)*n(data_id) results, including nulls, which is necessary for subsequent processing. Knowing the order, this was used to generate an HTML table. Both data_id and loc_id can in theory scale up to a couple of thousand values (which is obviously not great in a table, but that's another concern).
+----------+-----------+-----------+
|          | data_id 1 | data_id 2 |
+----------+-----------+-----------+
| loc_id 1 |     -     |  999.99   |
+----------+-----------+-----------+
| loc_id 2 |  888.88   |     -     |
+----------+-----------+-----------+
To speed things up, I was looking at replacing this with a single query:
select val from tablename where loc_id in (1,2) and data_id in (a,b) order by loc_id asc, data_id asc;
to get a result like (below) and iterate to build my table.
Rownum VAL
------- --------
1 null
2 999.99
3 777.77
4 null
Unfortunately that approach drops the nulls from the resultset so I end up with
Rownum VAL
------- --------
1 999.99
2 777.77
Note that it is possible that neither data_id nor loc_id has any match, in which case I would still need a null, null.
So I don't know which value matches which. I could match rows to the expected loc_id/data_id combinations in PHP if I also selected loc_id and data_id... but that's getting messy.
I'm still a novice at SQL in general, and this is absolutely the first time I've worked with PostgreSQL, so hopefully this isn't too obvious... As I post this I'm looking at two ways to solve this: ANY with ARRAY[] and joins. Will update if anything new is found.
tl;dr question
How do I do a where loc_id in (1,2) and data_id in (a,b) and keep the nulls so that I always get n(loc)*n(data) results?
You can achieve that in a single query with two steps:
Generate a matrix of all desired rows in the output.
LEFT [OUTER] JOIN to actual rows.
You get at least one row for every cell in your table.
If (loc_id, data_id) is unique, you get exactly one row.
SELECT t.val
FROM (VALUES (1), (2)) AS l(loc_id)
CROSS JOIN (VALUES ('a'), ('b')) AS d(data_id) -- generate total grid of rows
LEFT JOIN tablename t USING (loc_id, data_id) -- attach matching rows (if any)
ORDER BY l.loc_id, d.data_id;
Works for any number of columns with any number of values.
For your simple case:
SELECT t.val
FROM (
VALUES
(1, 'a'), (1, 'b')
, (2, 'a'), (2, 'b')
) AS ld (loc_id, data_id) -- total grid of rows
LEFT JOIN tablename t USING (loc_id, data_id) -- attach matching rows (if any)
ORDER BY ld.loc_id, ld.data_id;
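If the lists of ids grow, the same pattern can be fed from arrays; here is a sketch using unnest(), which ties in with the ANY/ARRAY idea mentioned in the question (tablename and the columns are as in the question):
SELECT l.loc_id, d.data_id, t.val
FROM unnest(ARRAY[1, 2]) AS l(loc_id)            -- one row per wanted loc_id
CROSS JOIN unnest(ARRAY['a', 'b']) AS d(data_id) -- one row per wanted data_id
LEFT JOIN tablename t USING (loc_id, data_id)    -- attach matching rows (if any)
ORDER BY l.loc_id, d.data_id;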
You could also keep the rows where the filter columns themselves are NULL:
where (loc_id in (1,2) or loc_id is null)
and (data_id in (a,b) or data_id is null)
Select the fields you use for filtering, so you know where the values came from:
select loc,data,val from tablename where loc in (1,2) and data in (a,b);
You won't get nulls this way either, but it's not a problem anymore. You know which fields are missing, and you know those are nulls.
I want to be able to select data from TableA where Field1 is greater than Field2 in TableB.
In my head I imagine it to be something like this:
Select TableA.*
from TableA
Join TableB
On TableA.PK = TableB.FK
WHERE TableA.Field1 > TableB.Field2
I am using SQL Server 2005, and TableA.Field1 and TableB.Field2 look like:
2004102881010 - data type - varchar
My PK and FK look like:
0908232 - data type - nvarchar
The problem is that when this query is run, ALL the data is displayed and not just the rows where Field1 is greater.
Cheers:)
Seems to be working correctly for this demo code. Perhaps I'm not understanding the problem or data.
;
with TABLEA (PK, Field1) AS
(
-- Sample row that is filtered out
SELECT CAST('0908232' AS nvarchar(10)), CAST('2004102881010' AS varchar(50))
-- This is bigger than what's in B
UNION ALL SELECT CAST('0908232' AS nvarchar(10)), CAST('2005102881010' AS varchar(50))
)
, TABLEB(FK, Field2) AS
(
-- This matches row 1 above and will be excluded
SELECT CAST('0908232' AS nvarchar(10)), CAST('2004102881010' AS varchar(50))
)
SELECT TableA.*
FROM TableA
INNER JOIN TableB
ON TableA.PK = TableB.FK
WHERE TableA.Field1 > TableB.Field2
Results
PK Field1
0908232 2005102881010
This seems like a problem with missing zeroes: 2004102881010 appears to be missing a zero and should presumably read 20041028081010.
There is nothing wrong with your query, but your data.
Consider 2001-01-01 01:01:01, this would be seen as: 200111111
It should be seen as: 20010101010101
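To make the point concrete, a small made-up comparison of an unpadded value against a fully padded one:
DECLARE @a varchar(20) = '200111111';      -- 2001-01-01 01:01:01 with the zeroes dropped
DECLARE @b varchar(20) = '20010101010102'; -- 2001-01-01 01:01:02, fully padded
SELECT CASE WHEN @a > @b THEN '@a sorts after @b' ELSE '@a sorts before @b' END AS string_compare;
-- Returns '@a sorts after @b', even though @a is the earlier timestamp:
-- the comparison is character by character, not numeric.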
Comparison operators (>, <) used on strings (varchars, nvarchars, etc.) work alphabetically. For example, '9' > '11' is true. You might try doing a data type conversion...
WHERE CAST(TableA.Field1 AS bigint) > CAST(TableB.Field2 AS bigint) -- bigint, since values like 2004102881010 are too large for int