SEQUENCE in DB2

I need to create a sequence in DB2 based on the current TIMESTAMP. For example: today is 08/25/2022, so the sequence must be:
082520221
082520222
082520223
082520224
And when the server changes the date at midnight to 08/26/2022 the sequence will change to:
082620221
082620222
082620223
082620224
I can also use functions.
Version 7 Release 3

Using recursion you can create a CTE whose rows contain values in sequence, and from there just concatenate the date and the sequence value.
For example, "MYSEQUENCE" is a CTE containing 2 columns (value, maxvalue) and 100 rows where "value" runs from 1 to 100:
with MYSEQUENCE (VALUE, MAXVALUE) as (
select *
from (values(1,100))
union all
select VALUE + 1, MAXVALUE
from MYSEQUENCE
where VALUE + 1 <= MAXVALUE)
select VARCHAR_FORMAT(current timestamp, 'MMDDYYYY') || VALUE
from MYSEQUENCE
Results:
083120221
083120222
083120223
083120224
083120225
083120226
083120227
083120228
083120229
0831202210
0831202211
0831202212
0831202213
0831202214
0831202215
0831202216
0831202217
0831202218
0831202219
0831202220
0831202221
...
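If you want to sanity-check the shape of that query outside DB2, here is a minimal SQLite equivalent of the same recursive CTE, run from Python (this is a sketch: SQLite's strftime stands in for DB2's VARCHAR_FORMAT, and the 100-row cap matches the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE myseq(value, maxvalue) AS (
        SELECT 1, 100
        UNION ALL
        SELECT value + 1, maxvalue FROM myseq WHERE value + 1 <= maxvalue
    )
    -- strftime('%m%d%Y', 'now') plays the role of
    -- VARCHAR_FORMAT(current timestamp, 'MMDDYYYY')
    SELECT strftime('%m%d%Y', 'now') || value FROM myseq
""").fetchall()

# Each row is the 8-character MMDDYYYY date followed by the running value
print(rows[0][0], rows[-1][0])
```

Because the date prefix is recomputed at query time, the generated values automatically roll over when the server date changes at midnight, which is the behavior the question asks for.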


Snowflake external table partition by "Defining expression for partition column year is invalid"

I have a parquet asset in S3 and wish to make an external table from this asset.
The asset is partitioned by year, month, day and hour.
My DDL is below:
CREATE OR REPLACE external TABLE abc (
"year" int as (value:"partition_0"::int),
"month" int as (value:"partition_1"::int),
"day" int as (value:"partition_2"::int),
"hour" int as (value:"partition_3"::int),
"partition_key" varchar as (METADATA$EXTERNAL_TABLE_PARTITION)
)
PARTITION BY ("year", "month", "day", "hour")
PARTITION_TYPE = USER_SPECIFIED
WITH location = #abc
auto_refresh = true
file_format = (type = parquet);
When I try to partition by the following, I get this error:
PARTITION BY ("year", "month", "day", "hour")
>>>Error: Defining expression for partition column year is invalid.
When I try to partition by partition_key as below, I don't get an error, but the external table is now empty
PARTITION BY ("partition_key")
>>> empty table
Anyone know what's going on here and how I can rectify this?
OK, so using the CITIBIKE data, which has parquet files in an external AWS stage (us-east-1):
s3://snowflake-workshop-lab/citibike-trips-parquet/
I can recreate the error using the file path parts as year/month/day:
create or replace external table cb (
trip_id int as (value:TRIPID::int),
filename char as (metadata$FILENAME),
year int as (split_part(metadata$FILENAME,'/', 2)::int),
month int as (split_part(metadata$FILENAME,'/', 3)::int),
day int as (split_part(metadata$FILENAME,'/', 4)::int)
)
partition by (year, month, day)
partition_type = USER_SPECIFIED
with location = #CITIBIKE_TRIPS_PARQUET
auto_refresh = true
file_format = (type=parquet);
Defining expression for partition column YEAR is invalid.
If I comment out:
--partition by (year, month, day)
--partition_type = USER_SPECIFIED
It creates and I can read rows:
select * from cb limit 2;
VALUE | TRIP_ID | FILENAME | YEAR | MONTH | DAY
{ "BIKEID": "2013-268", ... | 813124 | citibike-trips-parquet/2013/06/10/data_01a19496-0601-8b21-003d-9b03003c624a_1106_0_0.snappy.parquet | 2013 | 6 | 10
{ "BIKEID": "2013-220", ... | 813161 | citibike-trips-parquet/2013/06/10/data_01a19496-0601-8b21-003d-9b03003c624a_1106_0_0.snappy.parquet | 2013 | 6 | 10
Reading the create-external-table docs for partitioning parameters, the AWS example pulls a path apart, but turns it back into a single date field:
create external table et1(
date_part date as to_date(split_part(metadata$filename, '/', 3)
|| '/' || split_part(metadata$filename, '/', 4)
|| '/' || split_part(metadata$filename, '/', 5), 'YYYY/MM/DD'),
timestamp bigint as (value:timestamp::bigint),
col2 varchar as (value:col2::varchar))
partition by (date_part)
thus:
create or replace external table cb (
trip_id int as (value:TRIPID::int),
filename char as (metadata$FILENAME),
date_part date as to_date(
split_part(metadata$FILENAME,'/', 2) || '/' ||
split_part(metadata$FILENAME,'/', 3) || '/' ||
split_part(metadata$FILENAME,'/', 4), 'YYYY/MM/DD')
)
partition by (date_part)
--partition_type = USER_SPECIFIED
with location = #CITIBIKE_TRIPS_PARQUET
auto_refresh = true
file_format = (type=parquet);
and then add a filter to my query:
select * from cb where date_part > '2020-04-01' limit 2;
and the query profile shows a limited set of partitions were scanned (as one would expect).
If I add partition_type = USER_SPECIFIED back into the code I get the error:
Defining expression for partition column DATE_PART is invalid.
which makes me think the original example would have worked if this was dropped also, which testing shows it does:
create or replace external table cb (
trip_id int as (value:TRIPID::int),
filename char as (metadata$FILENAME),
year int as (split_part(metadata$FILENAME,'/', 2)::int),
month int as (split_part(metadata$FILENAME,'/', 3)::int),
day int as (split_part(metadata$FILENAME,'/', 4)::int)
)
partition by (year, month, day)
--partition_type = USER_SPECIFIED
with location = #CITIBIKE_TRIPS_PARQUET
auto_refresh = true
file_format = (type=parquet);
select * from cb where year = 2018 limit 2;
TRIP_ID | FILENAME | YEAR | MONTH | DAY
145400 | citibike-trips-parquet/2018/01/06/data_01a19496-0601-8b21-003d-9b03003c624a_2906_6_0.snappy.parquet | 2018 | 1 | 6
145545 | citibike-trips-parquet/2018/01/06/data_01a19496-0601-8b21-003d-9b03003c624a_2906_6_0.snappy.parquet | 2018 | 1 | 6
So, reading the docs again after all that:
Defines the partition type for the external table as user-defined. The owner of the external table (i.e. the role that has the OWNERSHIP privilege on the external table) must add partitions to the external metadata manually by executing ALTER EXTERNAL TABLE … ADD PARTITION statements.
Do not set this parameter if partitions are added to the external table metadata automatically upon evaluation of expressions in the partition columns.
I suspect the first PARTITION BY line is exactly what this note is referring to: "do not set this parameter" if you let the partition-column expressions add partitions automatically.
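For reference, the partition expressions above are only doing simple path splitting. A quick Python sketch of what split_part(metadata$FILENAME, '/', n) extracts from one of these file names:

```python
# One of the actual file paths from the stage listing above
filename = "citibike-trips-parquet/2018/01/06/data_01a19496-0601-8b21-003d-9b03003c624a_2906_6_0.snappy.parquet"

# split_part(x, '/', n) is 1-based, so parts 2, 3, 4 are year, month, day;
# Python's split() is 0-based, hence indices 1, 2, 3.
parts = filename.split("/")
year, month, day = int(parts[1]), int(parts[2]), int(parts[3])
print(year, month, day)  # 2018 1 6
```

Snowflake evaluates those expressions per file to register each (year, month, day) partition automatically, which is precisely why PARTITION_TYPE = USER_SPECIFIED (manual ADD PARTITION) conflicts with them.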

POWER BI - workaround possible for recursive calculation of my dax measure?

I have 2 tables: orders (with the order validity date) and in-transit stock (stock reaching the place where the order will be serviced).
(using sample data to simplify for understanding)
I am looking for a final calculation like this in my final table -
I have done the calculation up to column 4 in Power BI.
If this were Excel I could have simply done:
used_stock(2) = serviced(1) + used_stock(1)
avail_stock(2) = total_qty(2) - used_stock(2)
serviced(2) = min(order(2), avail_stock(2))
My base tables look like this - 
order table -
intransit table -
I have done the total_qty measure calculation by taking the cumulative sum of shipment quantity for dates before the selected order validity date.
I am trying to do the rest of the measures but ending up in circular references. Is there a way I can do it?
edit -
Clarifying the required logic a bit more:
let's say the 2nd order is 15 and the 2nd shipment reaches on the 24th; then the base data and output table should look like this -
With the presently proposed solution the table will erroneously look like -
Try these measures:
total_qty =
VAR current_row_date = MIN('order'[order valid till date])
RETURN
CALCULATE(
SUM(intrasit[quantity in shipment]),
FILTER(
ALL(intrasit),
intrasit[expected date of reaching] < current_row_date
)
)
used_stock =
VAR current_row_date = MIN('order'[order valid till date])
RETURN
CALCULATE(
SUM('order'[order quantity]),
FILTER(
ALL('order'),
'order'[order valid till date] < current_row_date
)
) + 0
avail_stock = [total_qty] - [used_stock]
serviced =
IF(
MIN('order'[order quantity]) <= [avail_stock],
MIN('order'[order quantity]),
[avail_stock]
)
Here is your final output:
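The row-by-row recursion the question describes (and that the measures above only approximate, since their used_stock sums prior order quantities rather than prior serviced quantities) can be written as a plain loop. A sketch with hypothetical quantities:

```python
# Hypothetical order quantities, and the cumulative in-transit stock
# (total_qty) available before each order's validity date.
orders = [20, 15, 30]
total_qty = [25, 40, 40]

used_stock = 0
serviced = []
for order, tq in zip(orders, total_qty):
    avail_stock = tq - used_stock   # avail_stock(n) = total_qty(n) - used_stock(n)
    s = min(order, avail_stock)     # serviced(n)    = min(order(n), avail_stock(n))
    serviced.append(s)
    used_stock += s                 # used_stock(n+1) = used_stock(n) + serviced(n)

print(serviced)  # [20, 15, 5]
```

Each serviced value feeds the next row's used_stock, which is exactly the circular dependency DAX measures cannot express directly; this is why the workaround either precomputes the running total in Power Query / a calculated column or accepts the approximation above.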

Function to extract array items to different columns postgresql

I'm trying to design a function to solve this problem. I have a column with cities that looks like this:
1 |Curaçao-Amsterdam
2 |St. Christopher-Essequibo
3 |Texel-Riohacha-Buenos Aires-La Rochelle
And I have used this query to extract it to an array of elements:
select t2.rut1,t2.rutacompleta, t2.id
from (
select regexp_split_to_array(t.rutacompleta, E'[\-]+') as rut1,
t.rutacompleta,t.id
from (
select id, strpos(ruta, '-') as posinic, strpos(ruta, '-') as posfin,
ruta as rutacompleta
from dyncoopnet.todosnavios2
) t
) t2
That gives this result:
{Curaçao,Amsterdam}
{"St. Christopher",Essequibo}
{Texel,Riohacha,"Buenos Aires","La Rochelle"}
And I want to create a function to extract all array elements to different columns. I have thought of a loop like this:
create or replace function extractpuertos()
returns text as
$body$
declare
i integer;
puerto text;
begin
i := 1
while (i >=1)
loop
with tv as(
select t2.rut1,t2.rutacompleta, t2.id from(
select regexp_split_to_array(t.rutacompleta, E'[\-]+') as rut1,
t.rutacompleta,t.id from(
select id, strpos(ruta, '-') as posinic, strpos(ruta, '-') as posfin,ruta as
rutacompleta from dyncoopnet.todosnavios2) t)t2
)
select tv.rut1[i] as puerto from tv;
end loop;
return puerto;
end;
But I'm not sure it is a proper solution, or how to implement it. Any hint?
Thanks in advance!
Is this what you are trying to do?
create table:
t=# create table so65 (i int, t text);
CREATE TABLE
Time: 55.234 ms
populate data:
t=# copy so65 from stdin delimiter '|';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 1 |Curaçao-Amsterdam
>> 2 |St. Christopher-Essequibo
>> 3 |Texel-Riohacha-Buenos Aires-La Rochelle
>> \.
COPY 3
Time: 2856.465 ms
split:
t=# select string_to_array(t,'-') from so65;
string_to_array
-----------------------------------------------
{Curaçao,Amsterdam}
{"St. Christopher",Essequibo}
{Texel,Riohacha,"Buenos Aires","La Rochelle"}
(3 rows)
Time: 4.428 ms
to one column:
t=# select unnest(string_to_array(t,'-')) from so65;
unnest
-----------------
Curaçao
Amsterdam
St. Christopher
Essequibo
Texel
Riohacha
Buenos Aires
La Rochelle
(8 rows)
Time: 1.662 ms
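If you really do want one column per port (rather than one row per port via unnest), the split-into-columns idea looks like this in a Python sketch, padding short routes with NULLs so every row has the same number of columns:

```python
routes = [
    "Curaçao-Amsterdam",
    "St. Christopher-Essequibo",
    "Texel-Riohacha-Buenos Aires-La Rochelle",
]

split = [r.split("-") for r in routes]
width = max(len(s) for s in split)  # the widest route decides the column count
table = [s + [None] * (width - len(s)) for s in split]
for row in table:
    print(row)
```

The SQL equivalent is simply indexing the array (rut1[1], rut1[2], ... rut1[n]); out-of-range subscripts yield NULL, which gives the same padding for free. The catch, and the reason a function is awkward here, is that the maximum route length must be known up front, since a SQL query cannot return a variable number of columns.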

Listagg for large data and included values in quotes

I would like to get all the type names of a user, comma-separated and enclosed in single quotes. The problem I have is that the &apos; entity is displayed in the output instead of '.
Trial 1
SELECT LISTAGG(TYPE_NAME, ''',''') WITHIN GROUP (ORDER BY TYPE_NAME)
FROM ALL_TYPES
WHERE OWNER = 'USER1';
ORA-01489: result of string concatenation is too long
01489. 00000 - "result of string concatenation is too long"
*Cause: String concatenation result is more than the maximum size.
*Action: Make sure that the result is less than the maximum size.
Trial 2
SELECT '''' || RTRIM(XMLAGG(XMLELEMENT(E,TYPE_NAME,q'$','$' ).EXTRACT('//text()')
ORDER BY TYPE_NAME).GetClobVal(),q'$','$') AS LIST
FROM ALL_TYPES
WHERE OWNER = 'USER1';
&apos;TYPE1&apos;,&apos;TYPE2&apos;, ............... ,&apos;TYPE3&apos;,&apos;
Trial 3
SELECT
dbms_xmlgen.CONVERT(XMLAGG(XMLELEMENT(E,TYPE_NAME,''',''').EXTRACT('//text()')
ORDER BY TYPE_NAME).GetClobVal())
AS LIST
FROM ALL_TYPES
WHERE OWNER = 'USER1';
TYPE1&amp;apos;,&amp;apos;TYPE2&amp;apos;, ......... ,&amp;apos;TYPE3&amp;apos;,&amp;apos;
I don't want to call the replace function and then take a substring, as follows:
With tbla as (
SELECT REPLACE('''' || RTRIM(XMLAGG(XMLELEMENT(E,TYPE_NAME,q'$','$' ).EXTRACT('//text()')
ORDER BY TYPE_NAME).GetClobVal(),q'$','$'),'&apos;',''') AS LIST
FROM ALL_TYPES
WHERE OWNER = 'USER1')
select SUBSTR(list, 1, LENGTH(list) - 2)
from tbla;
Is there any other way?
Use dbms_xmlgen.convert(col, 1) to prevent escaping.
According to the official docs, the second parameter, flag, is:
flag
The flag setting; ENTITY_ENCODE (default) for encode, and
ENTITY_DECODE for decode.
ENTITY_ENCODE - 0 (default)
ENTITY_DECODE - 1
Try this:
select
''''||substr(s, 1, length(s) - 2) list
from (
select
dbms_xmlgen.convert(xmlagg(xmlelement(e,type_name,''',''')
order by type_name).extract('//text()').getclobval(), 1) s
from all_types
where owner = 'USER1'
);
Tested similar code below with ~100,000 rows:
with t (s) as (
select level
from dual
connect by level < 100000
)
select
''''||substr(s, 1, length(s) - 2)
from (
select
dbms_xmlgen.convert(xmlagg(XMLELEMENT(E,s,''',''') order by s desc).extract('//text()').getClobVal(), 1) s
from t);
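String-wise, what the aggregation plus the trailing-trim does is just this (a Python sketch with made-up type names):

```python
type_names = ["TYPE1", "TYPE2", "TYPE3"]

# Each element is emitted followed by the literal ',' separator text "','" ,
# so the raw aggregate ends with an extra ",'"
# that substr(s, 1, length(s) - 2) trims off.
aggregated = "".join(name + "','" for name in type_names)  # "TYPE1','TYPE2','TYPE3','"
result = "'" + aggregated[:-2]                             # prepend the opening quote
print(result)  # 'TYPE1','TYPE2','TYPE3'
```

The XMLAGG/GetClobVal route exists only because it aggregates into a CLOB and therefore sidesteps the 4000-byte VARCHAR2 limit that makes LISTAGG raise ORA-01489; passing flag = 1 to dbms_xmlgen.convert then decodes the &apos; entities back to real quotes.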

case statement filter in MDX query

I want to write the following T-SQL query in MDX:
Select count(bugs),priority from table
where
Case when priority =1 then startdate< dateadd(dd,-7,getdate())
when priority =2 then startdate< dateadd(dd,-14,getdate())
end
group by priority
I tried the following, but it is not working:
WITH MEMBER [Measures].CHECKING
AS
CASE [Item].[ startdate].CurrentMember
WHEN [Item].[ Priority].&[1] THEN [Item].[startdate]<DATEADD(DAY,-7,NOW())
WHEN [Item].[ Priority].&[2] THEN [Item].[startdate]<DATEADD(DAY,-14,NOW())
END
SELECT
NON EMPTY{[Measures].[Count], [Measures].CHECKING }ON COLUMNS
,NON EMPTY{([Item].[ Priority].[ Priority].ALLMEMBERS )}
I am new to MDX queries, any suggestions on how to approach this please..
Your CASE logic has a basic problem: the statement cannot result in a condition. It can only result in a value that you then compare to something else.
To take your T-SQL example, I think it should read more like this:
Select count(bugs),priority from table
where
1 = Case when priority = 1 and startdate< dateadd(dd,-7,getdate()) Then 1
when priority = 2 and startdate< dateadd(dd,-14,getdate()) then 1
else 0 end
group by priority
A cleaner way to write this would be to skip the CASE altogether.
Select count(bugs),priority from table
where
(priority = 1 and startdate< dateadd(dd,-7,getdate()))
or
(priority = 2 and startdate< dateadd(dd,-14,getdate()))
group by priority
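The predicate logic of that rewrite, sketched in Python with made-up bug rows, just to make the per-priority age cutoff concrete:

```python
from datetime import date, timedelta
from collections import Counter

today = date(2022, 8, 25)        # stand-in for getdate()
cutoff_days = {1: 7, 2: 14}      # priority -> age threshold in days

# (priority, startdate) rows, made up for illustration
bugs = [
    (1, date(2022, 8, 10)),  # older than 7 days  -> counted
    (1, date(2022, 8, 24)),  # within 7 days      -> filtered out
    (2, date(2022, 8, 1)),   # older than 14 days -> counted
    (2, date(2022, 8, 20)),  # within 14 days     -> filtered out
]

# count(bugs) ... group by priority, after the OR-of-ANDs filter
counts = Counter(
    p for p, start in bugs
    if start < today - timedelta(days=cutoff_days[p])
)
print(dict(counts))  # {1: 1, 2: 1}
```

Each priority contributes only rows older than its own cutoff, which is exactly what the two AND-ed conditions joined by OR express.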
I am assuming the following:
Your startdate hierarchy is an attribute hierarchy, not a user hierarchy, and
the current day is its last member.
Then the following MDX should deliver what you want:
SELECT
{ [Measures].[Count] }
ON COLUMNS
,
{ [Item].[ Priority].&[1], [Item].[ Priority].&[2] }
ON ROWS
FROM (
SELECT ({ [Item].[ Priority].&[1] }
*
([Item].[ startdate].[ startdate].Members
- Tail([Item].[ startdate].[ startdate].Members, 7)
)
)
+
({ [Item].[ Priority].&[2] }
*
([Item].[ startdate].[ startdate].Members
- Tail([Item].[ startdate].[ startdate].Members, 14)
)
)
ON COLUMNS
FROM [yourCube]
)
Your code [Item].[startdate]<DATEADD(DAY,-7,NOW()) does not work in MDX for several reasons. Firstly, [Item].[startdate] is a hierarchy, and hence cannot be compared using <. Secondly, even if you re-stated it as [Item].[startdate].CurrentMember < DATEADD(DAY,-7,NOW()), you would have a member on the left side of the < and a date, i.e. a value, on the right side. One of the important things to keep in mind with MDX is the different types of objects: hierarchies, levels, members, tuples, and sets. None of these are values. You do not just have columns like in SQL.
