For use in React Native, suppose I have a SQLite database table with columns col1 (the primary key) and col2, where col1 contains a serial number and col2 contains JSON like:
col1 : col2
1 : {"id":"id1", "value":"value1"},
2 : {"id":"id2", "value":"value2, value3"},
3 : {"id":"id3", "value":"value4, value5"}
and I just want to extract the unique values using a SQLite query.
The output I expect is: ["value1","value2","value3","value4","value5"]
You can do it with json_extract():
select group_concat(json_extract(col2, '$.value'), ', ') result
from tablename
Result:
> result
> -------------------------------------
> value1, value2, value3, value4, value5
Or, if you want the result formatted as a JSON array, use json_group_array():
select replace(json_group_array(json_extract(col2, '$.value')), ', ', '","') result
from tablename
Result:
> | result |
> | :--------------------------------------------- |
> | ["value1","value2","value3","value4","value5"] |
If you can't use the JSON1 extension, then you can do it with string functions:
select group_concat(substr(
         col2,
         -- start just after the '"value":"' marker
         instr(col2, '"value":"') + length('"value":"'),
         -- length of the value: everything up to the closing '"}'
         length(col2) - (instr(col2, '"value":"') + length('"value":"') + 1)
       ), ', ') result
from tablename
Result:
> result
> -------------------------------------
> value1, value2, value3, value4, value5
I have the following table that holds an XML column:
Id | Label | Details
---+-------+--------------------------------------------------------------------------------------------------------------------------------
1  | Test1 | <terms><destination><email>email11@foo.com</email><email>email12@foo.com</email></destination><content>blabla</content></terms>
2  | Test2 | <terms><destination><email>email21@foo.com</email><email>email22@foo.com</email></destination><content>blabla</content></terms>
I would like a query that produces the following output:
Id | Label | Destination
---+-------+---------------------------------
1  | Test1 | email11@foo.com, email12@foo.com
2  | Test2 | email21@foo.com, email22@foo.com
Any clue on how I can concatenate the XML email node values into a column alongside the related columns (Id and Label)?
Thanks ahead.
select ID, Label,
    stuff(
        -- build a ", "-prefixed list with XQuery, then strip the leading separator
        details.query('for $step in /terms/destination/email/text() return concat(", ", string($step))')
            .value('.', 'nvarchar(max)'),
        1, 2, '') as Destination
from @tbl;
Please try the following solution. Because DDL and sample data population were not provided, the assumption is that the Details column is of the XML data type.
SQL
-- DDL and sample data population, start
DECLARE @tbl TABLE (ID INT IDENTITY PRIMARY KEY, Label VARCHAR(20), Details XML);
INSERT INTO @tbl (Label, Details) VALUES
('Test1',N'<terms><destination><email>email11@foo.com</email><email>email12@foo.com</email></destination><content>blabla</content></terms>'),
('Test2',N'<terms><destination><email>email21@foo.com</email><email>email22@foo.com</email></destination><content>blabla</content></terms>');
-- DDL and sample data population, end
SELECT ID, Label
, REPLACE(Details.query('data(/terms/destination/email/text())').value('.','VARCHAR(MAX)'), SPACE(1), ', ') AS Destination
FROM @tbl;
Output
+----+-------+----------------------------------+
| ID | Label | Destination |
+----+-------+----------------------------------+
| 1  | Test1 | email11@foo.com, email12@foo.com |
| 2  | Test2 | email21@foo.com, email22@foo.com |
+----+-------+----------------------------------+
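On SQL Server 2017 and later, STRING_AGG offers a shorter alternative to the XQuery concatenation; a sketch against the same @tbl sample, not applicable to older versions:
-- Shred the XML with nodes() and aggregate the email values with STRING_AGG.
SELECT t.ID, t.Label,
    STRING_AGG(e.n.value('.', 'VARCHAR(100)'), ', ') AS Destination
FROM @tbl AS t
CROSS APPLY t.Details.nodes('/terms/destination/email') AS e(n)
GROUP BY t.ID, t.Label;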
Let's say I have a table 'T1' that contains 3 records like this:
A               | B
----------------+------
101,103,115,189 | NAME1
101,115         | NAME2
102,116         | NAME3
and now I have to find all the rows where field A contains any of (101, 102, 115), which should be NAME1, NAME2, and NAME3.
Since there are over 100,000 rows, I need an efficient way to do this.
Any help is much appreciated.
I'm using SQL Server 2014.
Solution:
I created a third table, Job_RS_Category, to maintain the relationship between the Job and Category tables; the final query looks like this:
SELECT * FROM Job WHERE Job_Id IN (
    SELECT DISTINCT Job_Id FROM Job_RS_Category
    WHERE Category_Id IN (100015,100054,100060,100062,100063,100068,100070,100072,100073,100081,100096,100099))
SQL Fiddle
MS SQL Server 2017 Schema Setup:
CREATE TABLE Testdata
(
SomeID INT,
A VARCHAR(MAX),
B VARCHAR(MAX)
)
INSERT Testdata SELECT 1, '101,103,115,189', 'NAME1'
INSERT Testdata SELECT 2, '101,115' ,'NAME2'
INSERT Testdata SELECT 3, '102,116' , 'NAME3'
Query 1:
;WITH tmp(SomeID, B, DataItem, A) AS
(
    -- anchor: peel off the first item before the comma
    SELECT
        SomeID,
        B,
        LEFT(A, CHARINDEX(',', A + ',') - 1),
        STUFF(A, 1, CHARINDEX(',', A + ','), '')
    FROM Testdata
    UNION ALL
    -- recursive step: keep peeling items until the remainder is empty
    SELECT
        SomeID,
        B,
        LEFT(A, CHARINDEX(',', A + ',') - 1),
        STUFF(A, 1, CHARINDEX(',', A + ','), '')
    FROM tmp
    WHERE A > ''
)
SELECT DISTINCT B FROM tmp WHERE DataItem IN ('101', '102', '115')
Results:
| B |
|-------|
| NAME1 |
| NAME2 |
| NAME3 |
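The question mentions SQL Server 2014, so this does not apply there, but on SQL Server 2016 and later the recursive CTE can be replaced by the built-in STRING_SPLIT; a sketch assuming the same Testdata table:
-- STRING_SPLIT returns one row per comma-separated item, in column "value".
SELECT DISTINCT t.B
FROM Testdata AS t
CROSS APPLY STRING_SPLIT(t.A, ',') AS s
WHERE s.value IN ('101', '102', '115');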
How can I remove the date suffix before the file extension in SQL Server?
FileName | id
-------------------------+---
c:\abc_20181008.txt | 1
c:\xyz_20181007.dat | 2
c:\abc_xyz_20181007.dat | 3
c:\ab.xyz_20181007.txt | 4
Based on the above data in table emp, I want output like below:
FileName | id
-------------------+---
c:\abc.txt | 1
c:\xyz.dat | 2
c:\abc_xyz.dat | 3
c:\ab.xyz.txt | 4
I have tried this:
select
substring (Filename, replace(filename, '.', ''), len(filename)), id
from
emp
But this query does not return the expected result in SQL Server.
Please tell me how to write a query that achieves this.
You can use the following query:
SELECT id, filename,
       -- drop everything from the last '_' onwards, then re-append the extension
       LEFT(filename, LEN(filename) - i1) + RIGHT(filename, i2 - 1)
FROM emp
CROSS APPLY
(
    SELECT CHARINDEX('_', REVERSE(filename)) AS i1,     -- position of last '_' from the end
           PATINDEX('%[0-9]%', REVERSE(filename)) AS i2 -- position of last digit from the end
) AS x
You can try this as well (note that it assumes the date portion starts with a '2'):
declare @t table (a varchar(50))
insert into @t values ('c:\abc_20181008.txt')
insert into @t values ('c:\abc_xyz_20181007.dat')
insert into @t values ('c:\ab.xyz_20181007.txt')
insert into @t values ('c:\ab.xyz_20182007.txt')
select replace(SUBSTRING(a, 1, CHARINDEX('2', a) - 1) + SUBSTRING(a, len(a) - 3, LEN(a)), '_.', '.') from @t
I have a string like € 1.580 (i.e., 1580 euros).
Running the following statement, the number becomes:
select to_number(SUBSTR(col1,2)) from TAB1 where riga = 2;
Number result: 1.58
I want to keep the 0, because the real number is 1580 and I want to display 1.580. How can I do that?
Thank you.
A couple of options using the NLS_NUMERIC_CHARACTERS and NLS_CURRENCY options with TO_NUMBER:
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE tab1 (
col1 VARCHAR2( 200 )
);
INSERT INTO tab1 VALUES ( '€ 1.580' );
Query 1:
SELECT TO_NUMBER(
SUBSTR( col1, 2 ),
'999G999',
'NLS_NUMERIC_CHARACTERS='',.'''
) AS value1,
TO_NUMBER(
REPLACE( col1, ' ' ),
'L999G999',
'NLS_CURRENCY=''€'' NLS_NUMERIC_CHARACTERS='',.'''
) AS value2
FROM tab1
Results:
| VALUE1 | VALUE2 |
|--------|--------|
| 1580 | 1580 |
If you want to re-format it as a string, then just use TO_CHAR again:
Query 2:
SELECT TO_CHAR(
TO_NUMBER(
REPLACE( col1, ' ' ),
'L999G999',
'NLS_CURRENCY=''€'' NLS_NUMERIC_CHARACTERS='',.'''
),
'FM999G999',
'NLS_NUMERIC_CHARACTERS='',.'''
) AS value2
FROM tab1
Results:
| VALUE2 |
|--------|
| 1.580 |
But you ought to be storing the value as a number; whenever you want to format it as a currency, just use TO_CHAR( value, 'L999G999' ) with the appropriate NLS options, and do NOT store it as a formatted string.
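For example, a minimal sketch of that formatting step, assuming the value is stored as a NUMBER and the same NLS settings as above:
SELECT TO_CHAR(
         1580,
         'FML999G999',
         'NLS_NUMERIC_CHARACTERS='',.'' NLS_CURRENCY=''€'''
       ) AS formatted
FROM dual;
-- -> €1.580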
If you have a fixed number of digits, then you can cast the result to a fixed-scale number:
select CAST(to_number(SUBSTR(col1,2)) AS NUMBER(18,3)) from TAB1 where riga = 2;
Number result: 1.580
Or you can cast the return value of your to_number function to NUMBER(18,3).
I'm pulling data from an API in JSON, with a format like the example data below, where essentially every "row" is an array of values. The API doc defines the columns and their types in advance, so I know that col1 is, for example, a varchar, and that col2 is an int.
CREATE TEMP TABLE dat (data json);
INSERT INTO dat
VALUES ('{"COLUMNS":["col1","col2"],"DATA":[["a","1"],["b","2"]]}');
I want to transform this within PostgreSQL 9.3 such that I end up with:
col1 | col2
------------
a | 1
b | 2
Using json_array_elements I can get to:
SELECT json_array_elements(data->'DATA')
FROM dat
 json_array_elements
---------------------
 ["a","1"]
 ["b","2"]
but then I can't figure out how to convert the JSON array to a PostgreSQL array, so I can perform something like unnest(ARRAY['a','1']).
General case for unknown columns
To get a result like
col1 | col2
------------
a | 1
b | 2
will require a bunch of dynamic SQL, because you don't know the types of the columns in advance, nor the column names.
You can unpack the json with something like:
SELECT
json_array_element_text(colnames, colno) AS colname,
json_array_element_text(colvalues, colno) AS colvalue,
rn,
idx,
colno
FROM (
SELECT
data -> 'COLUMNS' AS colnames,
d AS colvalues,
rn,
row_number() OVER () AS idx
FROM (
SELECT data, row_number() OVER () AS rn FROM dat
) numbered
cross join json_array_elements(numbered.data -> 'DATA') d
) elements
cross join generate_series(0, json_array_length(colnames) - 1) colno;
producing a result set like:
colname | colvalue | rn | idx | colno
---------+----------+----+-----+-------
col1 | a | 1 | 1 | 0
col2 | 1 | 1 | 1 | 1
col1 | b | 1 | 2 | 0
col2 | 2 | 1 | 2 | 1
(4 rows)
You can then use this as input to the crosstab function from the tablefunc module with something like:
SELECT * FROM crosstab('
SELECT
to_char(rn,''00000000'')||''_''||to_char(idx,''00000000'') AS rowid,
json_array_element_text(colnames, colno) AS colname,
json_array_element_text(colvalues, colno) AS colvalue
FROM (
SELECT
data -> ''COLUMNS'' AS colnames,
d AS colvalues,
rn,
row_number() OVER () AS idx
FROM (
SELECT data, row_number() OVER () AS rn FROM dat
) numbered
cross join json_array_elements(numbered.data -> ''DATA'') d
) elements
cross join generate_series(0, json_array_length(colnames) - 1) colno;
') results(rowid text, col1 text, col2 text);
producing:
rowid | col1 | col2
---------------------+------+------
00000001_ 00000001 | a | 1
00000001_ 00000002 | b | 2
(2 rows)
The column names are not retained here.
If you were on 9.4 you could avoid the row_number() calls and use WITH ORDINALITY, making it much cleaner.
Simplified with fixed, known columns
Since you apparently know the number of columns and their types in advance the query can be considerably simplified.
SELECT
col1, col2
FROM (
SELECT
rn,
row_number() OVER () AS idx,
elem ->> 0 AS col1,
(elem ->> 1)::integer AS col2
FROM (
SELECT data, row_number() OVER () AS rn FROM dat
) numbered
cross join json_array_elements(numbered.data -> 'DATA') elem
ORDER BY 1, 2
) x;
result:
col1 | col2
------+------
a | 1
b | 2
(2 rows)
Using 9.4 WITH ORDINALITY
If you were using 9.4 you could keep it cleaner using WITH ORDINALITY:
SELECT
col1, col2
FROM (
SELECT
elem ->> 0 AS col1,
(elem ->> 1)::integer AS col2
FROM
dat
CROSS JOIN
json_array_elements(dat.data -> 'DATA') WITH ORDINALITY AS elements(elem, idx)
ORDER BY idx
) x;
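And to answer the literal question of converting a JSON array to a PostgreSQL array for unnest(): a sketch, assuming 9.4+ for json_array_elements_text (9.3 only has the json-returning variant):
-- Turn each JSON array row into a text[] that unnest() can consume.
SELECT ARRAY(SELECT json_array_elements_text(elem)) AS pg_array
FROM dat
CROSS JOIN json_array_elements(data -> 'DATA') AS elem;
-- pg_array is e.g. {a,1}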
This code worked fine for me; maybe it will be useful for someone.
select to_json(array_agg(t))
from (
select text, pronunciation,
(
select array_to_json(array_agg(row_to_json(d)))
from (
select part_of_speech, body
from definitions
where word_id=words.id
order by position asc
) d
) as definitions
from words
where text = 'autumn'
) t
Credits:
https://hashrocket.com/blog/posts/faster-json-generation-with-postgresql
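As a side note, to_json(array_agg(t)) can usually be shortened to json_agg(t), which has been available since PostgreSQL 9.3; a minimal runnable illustration:
-- json_agg builds the JSON array directly from the row set.
select json_agg(t) from (select 1 as id, 'autumn' as word) t;
-- -> [{"id":1,"word":"autumn"}]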