I have a string like € 1.580 (i.e. 1580 euros).
With the following statement the number becomes:
select to_number(SUBSTR(col1,2)) from TAB1 where riga = 2;
Number Result: 1.58
I want to keep the 0 because the real number is 1580 and I want to display 1.580. How can I do that?
Thank you.
A couple of options using the NLS_NUMERIC_CHARACTERS and NLS_CURRENCY settings with TO_NUMBER:
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE tab1 (
col1 VARCHAR2( 200 )
);
INSERT INTO tab1 VALUES ( '€ 1.580' );
Query 1:
SELECT TO_NUMBER(
         SUBSTR( col1, 2 ),
         '999G999',
         'NLS_NUMERIC_CHARACTERS='',.'''
       ) AS value1,
       TO_NUMBER(
         REPLACE( col1, ' ' ),
         'L999G999',
         'NLS_CURRENCY=''€'' NLS_NUMERIC_CHARACTERS='',.'''
       ) AS value2
FROM   tab1
Results:
| VALUE1 | VALUE2 |
|--------|--------|
| 1580 | 1580 |
If you want to re-format it as a string, just use TO_CHAR:
Query 2:
SELECT TO_CHAR(
         TO_NUMBER(
           REPLACE( col1, ' ' ),
           'L999G999',
           'NLS_CURRENCY=''€'' NLS_NUMERIC_CHARACTERS='',.'''
         ),
         'FM999G999',
         'NLS_NUMERIC_CHARACTERS='',.'''
       ) AS value2
FROM   tab1
Results:
| VALUE2 |
|--------|
| 1.580 |
But you ought to be storing the value as a number. Whenever you want to format it as a currency, just use TO_CHAR( value, 'L999G999' ) with the appropriate NLS options; do NOT store it as a formatted string.
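For example, a minimal sketch of that approach (the prices table and its amount column are hypothetical, not from the question):

CREATE TABLE prices ( amount NUMBER );
INSERT INTO prices VALUES ( 1580 );

-- the stored value stays a plain NUMBER; format it only on the way out
SELECT TO_CHAR(
         amount,
         'FML999G999',
         'NLS_CURRENCY=''€'' NLS_NUMERIC_CHARACTERS='',.'''
       ) AS formatted -- €1.580
FROM   prices;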
If you have a fixed number of digits then you can use something like this:
select CAST( to_number(SUBSTR(col1,2)) AS NUMBER(18,3) ) from TAB1 where riga = 2;
Number Result: 1.580
Or
you can cast the result of your to_number function to NUMBER(18,3) inline wherever you use it.
For use in React Native: suppose I have a SQLite database table that contains col1 (the primary key) and col2, where col1 contains a serial number and col2 contains JSON like
col1 : col2
1 : {"id":"id1", "value":"value1"},
2 : {"id":"id2", "value":"value2, value3"},
3 : {"id":"id3", "value":"value4, value5"}
and I just want to extract the unique values using a SQLite query.
The output I expect is: ["value1","value2","value3","value4","value5"]
You can do it with json_extract():
select group_concat(json_extract(col2, '$.value'), ', ') result
from tablename
Result:
> result
> -------------------------------------
> value1, value2, value3, value4, value5
Or, if you want the result formatted as a JSON array, use json_group_array():
-- json_group_array() emits a compact array (no space after the commas it
-- inserts), so the only ', ' sequences left are the ones inside the
-- multi-value strings; replacing them with '","' splits those into elements
select replace(json_group_array(json_extract(col2, '$.value')), ', ', '","') result
from tablename
Result:
> | result |
> | :--------------------------------------------- |
> | ["value1","value2","value3","value4","value5"] |
If you can't use the JSON1 extension then you can do it with string functions:
select group_concat(substr(
         col2,
         -- start just after the '"value":"' marker
         instr(col2, '"value":"') + length('"value":"'),
         -- length of the value: everything between the marker and the closing '"}'
         length(col2) - (instr(col2, '"value":"') + length('"value":"') + 1)
       ), ', ') result
from tablename
Result:
> result
> -------------------------------------
> value1, value2, value3, value4, value5
See the demo.
I have a Table Animals
Id | Name | Count | -- (other columns not relevant)
1 | horse | 11
2 | giraffe | 20
I want to try to insert or update values from a CSV string
Is it possible to do something like the following in 1 query?
;with results as
(
select * from
(
values ('horse'), ('giraffe'), ('lion')
)
animal_csv(aName)
left join animals on
animals.[Name] = animal_csv.aName
)
update results
set
[Count] = 1 + animals.[Count]
-- various other columns are set here
where Id is not null
--else
--insert into results ([Name], [Count]) values (results.aName, 1)
-- (essentially Where id is null)
It looks like what you're looking for is a table variable or temporary table rather than a common table expression.
If I understand your problem correctly, you are building a result set based on data you're getting from a CSV, merging it by incrementing values, and then returning that result set.
As I read your code, it looks as if your results would look like this:
aName   | Id | Name    | Count
horse   | 1  | horse   | 12
giraffe | 2  | giraffe | 21
lion    |    |         |
I think what you're looking for in your final result set is this:
Name    | Count
horse   | 12
giraffe | 21
lion    | 1
First, you can get from your csv and table to a resultset in a single CTE statement:
;WITH animal_csv AS (SELECT * FROM (VALUES('horse'),('giraffe'), ('lion')) a(aName))
SELECT ISNULL(Name, aName) Name
, CASE WHEN [Count] IS NULL THEN 1 ELSE 1 + [Count] END [Count]
FROM animal_csv
LEFT JOIN animals
ON Name = animal_csv.aName
Or, if you want to build your resultset using a table variable:
DECLARE @Results TABLE
(
    Name VARCHAR(30)
    , Count INT
)
;WITH animal_csv AS (SELECT * FROM (VALUES('horse'),('giraffe'), ('lion')) a(aName))
INSERT @Results
SELECT ISNULL(Name, aName) Name
, CASE WHEN [Count] IS NULL THEN 1 ELSE 1 + [Count] END [Count]
FROM animal_csv
LEFT JOIN animals
ON Name = animal_csv.aName
SELECT * FROM @Results
Or, if you just want to use a temporary table, you can build it like this (temp tables are deleted when the connection is released/closed or when they're explicitly dropped):
;WITH animal_csv AS (SELECT * FROM (VALUES('horse'),('giraffe'), ('lion')) a(aName))
SELECT ISNULL(Name, aName) Name
, CASE WHEN [Count] IS NULL THEN 1 ELSE 1 + [Count] END [Count]
INTO #results
FROM animal_csv
LEFT JOIN animals
ON Name = animal_csv.aName
SELECT * FROM #results
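If you then want to push those merged values back into the animals table itself in a single statement, here is a hedged sketch using MERGE (SQL Server 2008+; MERGE is my addition, not part of the resultset approach above):

;WITH animal_csv AS (SELECT * FROM (VALUES('horse'),('giraffe'),('lion')) a(aName))
MERGE animals AS target
USING animal_csv AS source
    ON target.Name = source.aName
WHEN MATCHED THEN
    -- existing animal: bump the count
    UPDATE SET [Count] = target.[Count] + 1
WHEN NOT MATCHED THEN
    -- new animal: start counting at 1
    INSERT (Name, [Count]) VALUES (source.aName, 1);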
How can I remove the dates before the file extensions in SQL Server?
FileName | id
-------------------------+---
c:\abc_20181008.txt | 1
c:\xyz_20181007.dat | 2
c:\abc_xyz_20181007.dat | 3
c:\ab.xyz_20181007.txt | 4
Based on the above data in table emp, I want output like below:
FileName | id
-------------------+---
c:\abc.txt | 1
c:\xyz.dat | 2
c:\abc_xyz.dat | 3
c:\ab.xyz.txt | 4
I have tried this:
select
substring (Filename, replace(filename, '.', ''), len(filename)), id
from
emp
But this query is not returning the expected result in SQL Server.
Please tell me how to write a query to achieve this task in SQL Server.
You can use the following query:
SELECT id, filename,
       LEFT(filename, LEN(filename) - i1) + RIGHT(filename, i2 - 1)
FROM emp
CROSS APPLY
(
    -- i1: how far the last '_' is from the end (found via the reversed string)
    -- i2: how far the last digit is from the end; i2 - 1 is the extension length
    SELECT CHARINDEX('_', REVERSE(filename)) AS i1,
           PATINDEX('%[0-9]%', REVERSE(filename)) AS i2
) AS x
Demo here
You can try this as well:
declare @t table (a varchar(50))

insert into @t values ('c:\abc_20181008.txt')
insert into @t values ('c:\abc_xyz_20181007.dat')
insert into @t values ('c:\ab.xyz_20181007.txt')
insert into @t values ('c:\ab.xyz_20182007.txt')

-- assumes the date always begins at the string's first '2' and the extension is 4 chars
select replace(SUBSTRING(a, 1, CHARINDEX('2', a) - 1)
               + SUBSTRING(a, len(a) - 3, LEN(a)), '_.', '.')
from @t
I'm pulling data from an API in JSON with a format like the example data below, where essentially every "row" is an array of values. The API doc defines the columns and their types in advance, so I know that col1 is, for example, a varchar, and that col2 is an int.
CREATE TEMP TABLE dat (data json);
INSERT INTO dat
VALUES ('{"COLUMNS":["col1","col2"],"DATA":[["a","1"],["b","2"]]}');
I want to transform this within PostgreSQL 9.3 such that I end up with:
col1 | col2
------------
a | 1
b | 2
Using json_array_elements I can get to:
SELECT json_array_elements(data->'DATA')
FROM dat
 json_array_elements
---------------------
 ["a","1"]
 ["b","2"]
but then I can't figure out how to convert the JSON array to a PostgreSQL array so I can perform something like unnest(ARRAY['a','1']).
General case for unknown columns
To get a result like
col1 | col2
------------
a | 1
b | 2
will require a bunch of dynamic SQL, because you don't know the types of the columns in advance, nor the column names.
You can unpack the json with something like:
SELECT json_array_element_text(colnames, colno) AS colname,
       json_array_element_text(colvalues, colno) AS colvalue,
       rn,
       idx,
       colno
FROM (
    SELECT data -> 'COLUMNS' AS colnames,
           d AS colvalues,
           rn,
           row_number() OVER () AS idx
    FROM (
        SELECT data, row_number() OVER () AS rn FROM dat
    ) numbered
    cross join json_array_elements(numbered.data -> 'DATA') d
) elements
cross join generate_series(0, json_array_length(colnames) - 1) colno;
producing a result set like:
colname | colvalue | rn | idx | colno
---------+----------+----+-----+-------
col1 | a | 1 | 1 | 0
col2 | 1 | 1 | 1 | 1
col1 | b | 1 | 2 | 0
col2 | 2 | 1 | 2 | 1
(4 rows)
You can then use this as input to the crosstab function from the tablefunc module with something like:
SELECT * FROM crosstab('
    SELECT to_char(rn, ''00000000'') || ''_'' || to_char(idx, ''00000000'') AS rowid,
           json_array_element_text(colnames, colno) AS colname,
           json_array_element_text(colvalues, colno) AS colvalue
    FROM (
        SELECT data -> ''COLUMNS'' AS colnames,
               d AS colvalues,
               rn,
               row_number() OVER () AS idx
        FROM (
            SELECT data, row_number() OVER () AS rn FROM dat
        ) numbered
        cross join json_array_elements(numbered.data -> ''DATA'') d
    ) elements
    cross join generate_series(0, json_array_length(colnames) - 1) colno
') results(rowid text, col1 text, col2 text);
producing:
rowid | col1 | col2
---------------------+------+------
00000001_ 00000001 | a | 1
00000001_ 00000002 | b | 2
(2 rows)
The column names are not retained here.
If you were on 9.4 you could avoid the row_number() calls and use WITH ORDINALITY, making it much cleaner.
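A minimal sketch of what that general unpack might look like on 9.4, assuming only the stock json functions (for brevity it drops the per-table-row rn, which could be numbered the same way):

-- WITH ORDINALITY supplies the positions that row_number() produced above
SELECT json_array_element_text(d.data -> 'COLUMNS', colno.i - 1) AS colname,
       json_array_element_text(r.elem, colno.i - 1) AS colvalue,
       r.idx   -- row position within DATA, from WITH ORDINALITY
FROM dat d
CROSS JOIN LATERAL json_array_elements(d.data -> 'DATA') WITH ORDINALITY AS r(elem, idx)
CROSS JOIN generate_series(1, json_array_length(d.data -> 'COLUMNS')) AS colno(i);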
Simplified with fixed, known columns
Since you apparently know the number of columns and their types in advance the query can be considerably simplified.
SELECT col1, col2
FROM (
    SELECT rn,
           row_number() OVER () AS idx,
           elem ->> 0 AS col1,
           (elem ->> 1)::integer AS col2
    FROM (
        SELECT data, row_number() OVER () AS rn FROM dat
    ) numbered
    cross join json_array_elements(numbered.data -> 'DATA') elem
    ORDER BY 1, 2
) x;
result:
col1 | col2
------+------
a | 1
b | 2
(2 rows)
Using 9.4 WITH ORDINALITY
If you were using 9.4 you could keep it cleaner using WITH ORDINALITY:
SELECT col1, col2
FROM (
    SELECT elem ->> 0 AS col1,
           (elem ->> 1)::integer AS col2
    FROM dat
    CROSS JOIN json_array_elements(dat.data -> 'DATA') WITH ORDINALITY AS elements(elem, idx)
    ORDER BY idx
) x;
This code worked fine for me; maybe it will be useful for someone.
select to_json(array_agg(t))
from (
    select text, pronunciation,
           (
               select array_to_json(array_agg(row_to_json(d)))
               from (
                   select part_of_speech, body
                   from definitions
                   where word_id = words.id
                   order by position asc
               ) d
           ) as definitions
    from words
    where text = 'autumn'
) t
Credits:
https://hashrocket.com/blog/posts/faster-json-generation-with-postgresql
Applies to Microsoft SQL Server 2008 R2.
The problem is
If we have a few dozen OUTER APPLYs (30 of them), the query begins to run pretty slowly. Inside each OUTER APPLY I have something more complicated than a simple select: a view.
Details
I'm implementing a kind of attribute system assigned to tables (in the database). Generally, a few tables each hold a reference to a table of attributes (key, value).
The pseudo-structure looks like this:
DECLARE @Lot TABLE (
    LotId INT PRIMARY KEY IDENTITY,
    SomeText VARCHAR(8))

INSERT INTO @Lot (SomeText)
OUTPUT INSERTED.*
VALUES ('Hello'), ('World')

DECLARE @Attribute TABLE(
    AttributeId INT PRIMARY KEY IDENTITY,
    LotId INT,
    Val VARCHAR(8),
    Kind VARCHAR(8))

INSERT INTO @Attribute (LotId, Val, Kind)
OUTPUT INSERTED.* VALUES
(1, 'Foo1', 'Kind1'), (1, 'Foo2', 'Kind2'),
(2, 'Bar1', 'Kind1'), (2, 'Bar2', 'Kind2'), (2, 'Bar3', 'Kind3')
LotId SomeText
----------- --------
1 Hello
2 World
AttributeId LotId Val Kind
----------- ----------- -------- --------
1 1 Foo1 Kind1
2 1 Foo2 Kind2
3 2 Bar1 Kind1
4 2 Bar2 Kind2
5 2 Bar3 Kind3
I can now run a query such as:
SELECT
[l].[LotId]
, [SomeText]
, [Oa1].[AttributeId]
, [Oa1].[LotId]
, 'Kind1Val' = [Oa1].[Val]
, [Oa1].[Kind]
, [Oa2].[AttributeId]
, [Oa2].[LotId]
, 'Kind2Val' = [Oa2].[Val]
, [Oa2].[Kind]
, [Oa3].[AttributeId]
, [Oa3].[LotId]
, 'Kind3Val' = [Oa3].[Val]
, [Oa3].[Kind]
FROM @Lot AS l
OUTER APPLY(SELECT * FROM @Attribute AS la WHERE la.[LotId] = l.[LotId] AND la.[Kind] = 'Kind1') AS Oa1
OUTER APPLY(SELECT * FROM @Attribute AS la WHERE la.[LotId] = l.[LotId] AND la.[Kind] = 'Kind2') AS Oa2
OUTER APPLY(SELECT * FROM @Attribute AS la WHERE la.[LotId] = l.[LotId] AND la.[Kind] = 'Kind3') AS Oa3
LotId SomeText AttributeId LotId Kind1Val Kind AttributeId LotId Kind2Val Kind AttributeId LotId Kind3Val Kind
----------- -------- ----------- ----------- -------- -------- ----------- ----------- -------- -------- ----------- ----------- -------- --------
1 Hello 1 1 Foo1 Kind1 2 1 Foo2 Kind2 NULL NULL NULL NULL
2 World 3 2 Bar1 Kind1 4 2 Bar2 Kind2 5 2 Bar3 Kind3
This is the simple way to get a pivot table of the attribute values, including results for Lot rows that do not have an attribute of a given kind (such as Kind3).
I know about Microsoft's PIVOT, but it is not simple and does not fit well here.
Finally: what will be faster and give the same results?
In order to get the result you can unpivot and then pivot the data.
There are two ways that you can perform this. First, you can use the UNPIVOT and PIVOT functions:
select *
from
(
    select LotId,
           SomeText,
           col + '_' + CAST(rn as varchar(10)) col,
           value
    from
    (
        select l.LotId,
               l.SomeText,
               cast(a.AttributeId as varchar(8)) attributeid,
               cast(a.LotId as varchar(8)) a_LotId,
               a.Val,
               a.Kind,
               ROW_NUMBER() over(partition by l.lotid order by a.attributeid) rn
        from @Lot l
        left join @Attribute a
            on l.LotId = a.LotId
    ) src
    unpivot
    (
        value
        for col in (attributeid, a_LotId, val, kind)
    ) unpiv
) d
pivot
(
    max(value)
    for col in (attributeid_1, a_LotId_1, Val_1, Kind_1,
                attributeid_2, a_LotId_2, Val_2, Kind_2,
                attributeid_3, a_LotId_3, Val_3, Kind_3)
) piv
See SQL Fiddle with Demo.
Or starting in SQL Server 2008+, you can use CROSS APPLY with a VALUES clause to unpivot the data:
select *
from
(
    select LotId,
           SomeText,
           col + '_' + CAST(rn as varchar(10)) col,
           value
    from
    (
        select l.LotId,
               l.SomeText,
               cast(a.AttributeId as varchar(8)) attributeid,
               cast(a.LotId as varchar(8)) a_LotId,
               a.Val,
               a.Kind,
               ROW_NUMBER() over(partition by l.lotid order by a.attributeid) rn
        from @Lot l
        left join @Attribute a
            on l.LotId = a.LotId
    ) src
    cross apply
    (
        values ('attributeid', attributeid), ('LotId', a_LotId), ('Value', Val), ('Kind', Kind)
    ) c (col, value)
) d
pivot
(
    max(value)
    for col in (attributeid_1, LotId_1, Value_1, Kind_1,
                attributeid_2, LotId_2, Value_2, Kind_2,
                attributeid_3, LotId_3, Value_3, Kind_3)
) piv
See SQL Fiddle with Demo.
The unpivot process takes the multiple columns for each LotId and SomeText and converts them into rows, giving the result:
| LOTID | SOMETEXT | COL | VALUE |
--------------------------------------------
| 1 | Hello | attributeid_1 | 1 |
| 1 | Hello | LotId_1 | 1 |
| 1 | Hello | Value_1 | Foo1 |
| 1 | Hello | Kind_1 | Kind1 |
| 1 | Hello | attributeid_2 | 2 |
I added a row_number() to the inner subquery to be used to create the new column names to pivot. Once the names are created, the pivot can be applied to the new columns, giving the final result.
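As a tiny standalone illustration of just that name-building step (the values here are made up):

SELECT col + '_' + CAST(rn AS varchar(10)) AS new_col
FROM (VALUES ('attributeid', 1), ('Kind', 2)) AS t(col, rn)

-- new_col
-- -------------
-- attributeid_1
-- Kind_2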
This could also be done using dynamic SQL:
DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

-- note: this version queries permanent tables Lot and Attribute, because
-- table variables declared in the outer batch are not visible inside EXECUTE()
select @cols = STUFF((SELECT ',' + QUOTENAME(col + '_' + rn)
                      from
                      (
                          select cast(ROW_NUMBER() over(partition by l.lotid order by a.attributeid) as varchar(10)) rn
                          from Lot l
                          left join Attribute a
                              on l.LotId = a.LotId
                      ) t
                      cross apply (values ('attributeid', 1),
                                          ('LotId', 2),
                                          ('Value', 3),
                                          ('Kind', 4)) c (col, so)
                      group by col, rn, so
                      order by rn, so
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)')
                 ,1,1,'')

set @query = 'SELECT LotId,
                     SomeText,' + @cols + '
              from
              (
                  select LotId,
                         SomeText,
                         col + ''_'' + CAST(rn as varchar(10)) col,
                         value
                  from
                  (
                      select l.LotId,
                             l.SomeText,
                             cast(a.AttributeId as varchar(8)) attributeid,
                             cast(a.LotId as varchar(8)) a_LotId,
                             a.Val,
                             a.Kind,
                             ROW_NUMBER() over(partition by l.lotid order by a.attributeid) rn
                      from Lot l
                      left join Attribute a
                          on l.LotId = a.LotId
                  ) src
                  cross apply
                  (
                      values (''attributeid'', attributeid), (''LotId'', a_LotId), (''Value'', Val), (''Kind'', Kind)
                  ) c (col, value)
              ) x
              pivot
              (
                  max(value)
                  for col in (' + @cols + ')
              ) p '

execute(@query)
See SQL Fiddle with Demo.
All three versions will give the same result:
| LOTID | SOMETEXT | ATTRIBUTEID_1 | LOTID_1 | VALUE_1 | KIND_1 | ATTRIBUTEID_2 | LOTID_2 | VALUE_2 | KIND_2 | ATTRIBUTEID_3 | LOTID_3 | VALUE_3 | KIND_3 |
-----------------------------------------------------------------------------------------------------------------------------------------------------------
| 1 | Hello | 1 | 1 | Foo1 | Kind1 | 2 | 1 | Foo2 | Kind2 | (null) | (null) | (null) | (null) |
| 2 | World | 3 | 2 | Bar1 | Kind1 | 4 | 2 | Bar2 | Kind2 | 5 | 2 | Bar3 | Kind3 |