How do I map a column containing the values 2, 4, and 5 to 2=active, 4=closd, and 5=inactv in an SSIS derived column? (SQL Server DB)
I'm expecting the column to show these values:
'active' instead of '2'
'closd' instead of '4'
'inactv' instead of '5'
Your data:
declare @a table (c varchar(100));
insert into @a (c)
values ('2,4,5');
declare @equivalenttable table (id varchar(100), equivalent varchar(100));
insert into @equivalenttable (id, equivalent)
values ('2', 'active'),
       ('4', 'closd'),
       ('5', 'inactv');
First use STRING_SPLIT with CROSS APPLY to split the string, then join the result to the equivalent table, and finally reassemble it with STRING_AGG, as follows:
select string_agg(equivalent, ',') within group (order by id) as c
from (
    select a1.value
    from @a
    cross apply string_split(c, ',') a1
) a2
join @equivalenttable e on a2.value = e.id;
There is a working demo on dbfiddle.
You should use the conditional operator (?:) as follows:
[inputcolumn] == "2" ? "active" :
[inputcolumn] == "4" ? "closed" :
[inputcolumn] == "5" ? "inactive" :
""
My suggestion is to stay away from derived columns and implement the CASE expression in the database query instead. Firstly, it offloads the execution to the database. Secondly, derived columns are not that easy to work with, and we want to keep the number of derived columns (or any other SSIS artifacts, for that matter) as low as possible.
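For example, here is a minimal sketch of that approach, assuming a hypothetical source table dbo.SourceTable with a StatusCode column (both names are placeholders for whatever your source query uses):
-- Map the codes to their labels in the source query instead of a derived column
SELECT CASE StatusCode
           WHEN '2' THEN 'active'
           WHEN '4' THEN 'closd'
           WHEN '5' THEN 'inactv'
           ELSE ''
       END AS StatusLabel
FROM dbo.SourceTable;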
I'm tasked with importing data into SQL Server that's pretty much JSON, but not quite. I've used OPENROWSET/OPENJSON to import it into a staging table, and the data looks like this
What I need to achieve is to migrate that to a single table with the following structure
I'm having no success; I even tried updating the data in the staging table to look like this and importing it, but no joy.
My current attempt:
SELECT A.[DATE], A.[VALUE]
FROM OPENJSON(@JSON) AS I
CROSS APPLY (
SELECT *
FROM OPENJSON(@JSON) WITH (
[DATE] NVARCHAR(MAX) '$.DATE',
[VALUE] NVARCHAR(MAX) '$.VALUE'
)
) A OUTPUT
Any recommendations?
Just do it this way:
CREATE TABLE #tmp (
instance NVARCHAR(50),
json NVARCHAR(1000)
)
INSERT #tmp
VALUES
( N'server1.com',
N'[{"date":10000, "value":"6"},{"date":20000, "value":"8"}]'
)
SELECT
t.instance, Date,Value
FROM #tmp t
OUTER APPLY OPENJSON(t.json)
WITH (
Date varchar(200) '$.date' ,
Value VARCHAR(100) '$.value'
)
For your first set of data, you have a doubly-nested JSON array, so you need to use OPENJSON to break open the outer one first:
SELECT
instance,
JSON_VALUE(j1.innerArray, '$[0]') AS date,
JSON_VALUE(j1.innerArray, '$[1]') AS value
FROM [table] t
CROSS APPLY OPENJSON(t.json) WITH (
innerArray nvarchar(max) '$' AS JSON
) j1
For the second version, just change the JSON_VALUE parameters:
JSON_VALUE(j1.innerArray, '$.date') AS date,
JSON_VALUE(j1.innerArray, '$.value') AS value
Original answer:
The reason for the unexpected result is that you have nested JSON arrays, but the WITH clause is not correct. You need to use the appropriate WITH clause, as in the statement below:
Table:
SELECT *
INTO Data
FROM (VALUES
('server1.com', '[[1613347200, "7"], [1613347205, "8"], [1613347202, "9"]]'),
('server2.com', '[[1613317200, "3"], [1613347215, "2"], [1613347212, "1"]]')
) v (instance, array)
Statement:
SELECT d.instance, j.[date], j.[value]
FROM Data d
OUTER APPLY OPENJSON(d.array) WITH (
[date] numeric(10, 0) '$[0]',
[value] varchar(1) '$[1]'
) j
Result:
instance date value
-----------------------------
server1.com 1613347200 7
server1.com 1613347205 8
server1.com 1613347202 9
server2.com 1613317200 3
server2.com 1613347215 2
server2.com 1613347212 1
Update:
Your second attempt is almost correct. The reason for the NULL values is that the path part of the column definitions in the WITH clause is case-sensitive:
SELECT d.instance, j.[date], j.[value]
FROM (VALUES
('server1.com', '[{"date":1613347200, "value":"7"}, {"date":1613347200, "value":"8"}]')
) d (instance, array)
OUTER APPLY OPENJSON(d.array) WITH (
[date] numeric(10, 0) '$.date',
[value] varchar(1) '$.value'
) j
I have a function that takes a list of primary keys separated by commas and splits it.
Oracle function:
create or replace function split(
    list in CHAR,
    delimiter in CHAR default ','
)
return split_tbl as
    splitted split_tbl := split_tbl();
    i pls_integer := 0;
    list_ varchar2(32767) := list;
begin
    loop
        i := instr(list_, delimiter);
        if i > 0 then
            splitted.extend(1);
            splitted(splitted.last) := substr(list_, 1, i - 1);
            list_ := substr(list_, i + length(delimiter));
        else
            splitted.extend(1);
            splitted(splitted.last) := list_;
            return splitted;
        end if;
    end loop;
end;
and I have this query in SQL Server that uses the data returned by that function as a table:
select maxUserSalary.id as 'UserSalary'
into #usersalary
from dbo.Split(@usersalary, ';') as userid
cross apply (
select top 1 * from User_Salary as usersalary
where usersalary.User_Id = userid.item
order by usersalary.Date desc
) as maxUserSalary
The problem is that I'm not able to use CROSS APPLY in Oracle to feed this data into the function that returns a table.
How can I use CROSS APPLY with Oracle to return this data from the function?
You're using Oracle 18c, so you can use the CROSS APPLY syntax. Oracle added it (as well as LATERAL and OUTER APPLY) in 12c.
Here is a simplified version of your logic:
select us.name
     , us.salary
from table(split('FOX IN SOCKS,THING ONE,THING TWO')) t
cross apply (select us.name, max(us.salary) as salary
             from user_salaries us
             where us.name = t.column_value
             group by us.name) us
There is a working demo on db<>fiddle.
If this doesn't completely solve your problem please post a complete question with table structures, sample data and expected output derived from that sample.
I think APC answered your direct question well. As a side note, I wanted to suggest NOT writing your own function to do this at all. There are several existing solutions to split delimited string values into virtual tables that don't require you to create your own custom types, and don't have the performance overhead of context switching between the SQL and PL/SQL engines.
-- example data - remove this to test with your User_Salary table
with User_Salary as (select 1 as id, 'A' as user_id, sysdate as "Date" from dual
union select 2, 'B', sysdate from dual)
-- your query:
select maxUserSalary.id as "UserSalary"
from (select trim(COLUMN_VALUE) as item
from xmltable(('"'||replace(:usersalary, ';', '","')||'"'))) userid -- note ';' delimiter
cross apply (
select * from User_Salary usersalary
where usersalary.User_Id = userid.item
order by usersalary."Date" desc
fetch first 1 row only
) maxUserSalary;
If you run this and pass in 'A;B;C' for :usersalary, you'll get 1 and 2 back.
A few notes:
In this example, I'm using ; as the delimiter, since that's what your query used.
I tried to match your table/column names, but your column name Date is invalid - it's an Oracle reserved keyword, so it has to be put in quotes to be a valid column name.
As a column identifier, "UserSalary" should also have double quotes, not single.
You can't use as with table aliases in Oracle (a small illustration follows after these notes).
I removed into usersalary, since into is only used with queries which return a single row, and your query can return multiple rows.
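For instance, a tiny illustration of the alias rule, using the User_Salary table from the question:
-- Column aliases may use as; table aliases must not
select us.user_id as uid   -- OK: as on a column alias
from User_Salary us;       -- OK: no as on the table alias
-- from User_Salary as us; -- this would raise a syntax error in Oracle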
In our SQL Server table we have a JSON object stored with an array of strings. I want to programmatically split that string into several columns. However, I cannot seem to get it to work, or even tell whether it's possible.
Is it possible to create multiple columns within the WITH clause, or is it a smarter move to do it within the SELECT statement?
I trimmed down some of the code to give a simplistic idea of what's given.
The example JSON is similar to { "arr": ["str1 - str2"] }
SELECT b.* FROM [table] a
OUTER APPLY
OPENJSON(a.value, '$.arr')
WITH
(
strSplit1 VARCHAR(100) SPLIT('$.arr', '-',1),
strSplit2 VARCHAR(100) SPLIT('$.arr', '-',2)
) b
Due to the tag [tsql] and the usage of OPENJSON I assume this is SQL-Server, but I might be wrong... Please always specify your RDBMS (with version).
Your JSON is rather weird... I think you've overdone it while trying to simplify this for brevity...
Try this:
DECLARE @tbl TABLE(ID INT IDENTITY,YourJSON NVARCHAR(MAX));
INSERT INTO @tbl VALUES(N'{ "arr": ["str1 - str2"] }') --weird example...
,(N'{ "arr": ["a","b","c"] }'); --array with three elements
SELECT t.ID
,B.[value] AS arr
FROM @tbl t
CROSS APPLY OPENJSON(YourJSON)
WITH(arr NVARCHAR(MAX) AS JSON) A
CROSS APPLY OPENJSON(A.arr) B;
A rather short approach (but fitting this simple example only) is this:
SELECT t.ID
,A.*
FROM @tbl t
OUTER APPLY OPENJSON(JSON_QUERY(YourJSON,'$.arr')) A
Hint
JSON support was introduced with SQL-Server 2016
UPDATE: If the JSON's content is a weird CSV-string...
There's a trick to transform a CSV into a JSON-array. Try this
DECLARE @tbl TABLE(ID INT IDENTITY,YourJSON NVARCHAR(MAX));
INSERT INTO @tbl VALUES(N'{ "arr": ["str1 - str2"] }') --weird example...
,(N'{ "arr": ["a","b","c"] }') --array with three elements
,(N'{ "arr": ["x-y-z"] }'); --array with three elements in a weird CSV format
SELECT t.ID
,B.[value] AS arr
,C.[value]
FROM @tbl t
CROSS APPLY OPENJSON(YourJSON)
WITH(arr NVARCHAR(MAX) AS JSON) A
CROSS APPLY OPENJSON(A.arr) B
CROSS APPLY OPENJSON('["' + REPLACE(B.[value],'-','","') + '"]') C;
Some simple replacements in OPENJSON('["' + REPLACE(B.[value],'-','","') + '"]') will create a JSON array out of your CSV-string, which can be opened in OPENJSON.
I'm not aware of any way to split a string within JSON. I wonder if the issue is down to your JSON containing a single string rather than multiple values?
The below example shows how to extract each string from the array; and if you wish to go further and split those strings on the hyphen, shows how to do that using SQL's normal SUBSTRING and CHARINDEX functions.
create table [table]
(
value nvarchar(max)
)
insert [table](value)
values ('{ "arr": ["str1 - str2"] }'), ('{ "arr": ["1234 - 5678","abc - def"] }')
SELECT b.value
, rtrim(substring(b.value,1,charindex('-',b.value)-1))
, ltrim(substring(b.value,charindex('-',b.value)+1,len(b.value)))
FROM [table] a
OUTER APPLY OPENJSON(a.value, '$.arr') b
If you want all values in a single column, you can use the string_split function: https://learn.microsoft.com/en-us/sql/t-sql/functions/string-split-transact-sql?view=sql-server-2017
SELECT ltrim(rtrim(c.value))
FROM [table] a
OUTER APPLY OPENJSON(a.value, '$.arr') b
OUTER APPLY STRING_SPLIT(b.value, '-') c
I know we can use LIKE for pattern matching; however, here is what I want to do.
I have a table which has a column, 'Pattern'; the values are like:
host1%
%host2
....
I have another table, which has a column, 'Host'. The question is: how can I check which values in 'Host' do not match any of the patterns in 'Pattern'?
If that is too complex, then a simplified question is: how can I check which values in 'Host' do not start with any of the strings in 'Pattern'?
We could use a loop, but is there a better way? Ideally it should work for SQL Server 2008, but the latest version will do.
thanks
Use where not exists followed by a subquery which checks each pattern against the current row of the table containing your data, i.e.:
where not exists
(
select top 1 1
from @patterns p
where d.datum like p.pattern
)
Full Code for Working Example: SQL Fiddle
declare @patterns table
(
pattern nvarchar(16) not null
)
declare @data table
(
datum nvarchar(16) not null
)
insert @patterns
values ('host1%')
,('%host2')
insert @data
values ('host1234')
, ('234host1')
, ('host2345')
, ('345host2')
select *
from @data d
where not exists
(
select top 1 1
from @patterns p
where d.datum like p.pattern
)
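Alternatively, the same check can be written as a left anti-join: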
select t1.host
from table_1 t1
left join table_2 t2 on t1.host like t2.pattern
where t2.pattern is null
I want to write a query to see if a category field is within a certain range. The problem is the field can contain null, text, or numeric text prefixed by a '#' character.
Does anybody know of SQL that will strip the non-numerics and allow me to do the following check:
category > 1 and category < 100
Here is a sample of what the field category can contain:
#230.1
#200
Null
text
I am using SQL Server 2000
It appears astander's solution is functional. You should, however, consider a few points:
If the table holds more than a few thousand rows, and if this type of query is to be run frequently, it may be beneficial to introduce a new column to hold the numeric value of the category (if available, null otherwise); a sketch of this follows below. This will be more efficient for two reasons: as written, SQL needs to scan the table completely, i.e. it needs to review every single row; it also needs to perform all these conversions, which are a bit expensive, CPU-wise.
You may consider introducing some extra logic to normalize the category field, for example to get rid of common leading or trailing characters. This will "rescue" several category codes which would otherwise translate to null and be unable to participate in these filters.
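A minimal sketch of the first point, assuming a hypothetical table dbo.Items holding the category column (the names are placeholders, and the syntax sticks to what SQL Server 2000 supports):
-- Add a numeric shadow column once (run in its own batch, before the UPDATE)
ALTER TABLE dbo.Items ADD category_num FLOAT NULL
GO
-- Populate it, stripping a leading '#' where present; keep it in sync on writes
UPDATE dbo.Items
SET category_num = CASE
        WHEN ISNUMERIC(category) = 1
            THEN CAST(category AS FLOAT)
        WHEN LEN(category) > 1 AND ISNUMERIC(RIGHT(category, LEN(category) - 1)) = 1
            THEN CAST(RIGHT(category, LEN(category) - 1) AS FLOAT)
    END
GO
-- The range check then becomes a plain filter with no per-row conversions
SELECT * FROM dbo.Items WHERE category_num > 1 AND category_num < 100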
Try something like this
DECLARE @Table TABLE(
Val VARCHAR(200)
)
INSERT INTO @Table (Val) SELECT '#230.1'
INSERT INTO @Table (Val) SELECT '#200'
INSERT INTO @Table (Val) SELECT '210'
INSERT INTO @Table (Val) SELECT NULL
INSERT INTO @Table (Val) SELECT 'text'
SELECT *
FROM (
SELECT CASE
WHEN ISNUMERIC(Val) = 1
THEN CAST(Val AS FLOAT)
WHEN LEN(Val) > 1 AND ISNUMERIC(RIGHT(Val,LEN(Val)-1)) = 1
THEN CAST(RIGHT(Val,LEN(Val)-1) AS FLOAT)
END Num
FROM @Table
WHERE Val IS NOT NULL
AND (
ISNUMERIC(Val) = 1
OR (
LEN(Val) > 1
AND ISNUMERIC(RIGHT(Val,LEN(Val)-1)) = 1
)
)
) Numbers
WHERE Num BETWEEN 205 AND 230