How to insert a random timestamp into a Snowflake timestamp field

I am trying the following SQL but I get an error:
CREATE TABLE slipstream( visit_timestamp time);
INSERT INTO slipstream values(dateadd(second, uniform(1, 10, random()), current_time()));
Error:
Invalid expression [DATE_ADDSECONDSTOTIME(CAST(UNIFORM(1, 10, RANDOM()) AS NUMBER(2,0)), '12:52:29.050000000')] in VALUES clause
Please advise

Using INSERT - SELECT pattern:
INSERT INTO slipstream (visit_timestamp)
SELECT dateadd(second, uniform(1, 10, random()), current_time());
Output:

The VALUES clause requires constants and allows only the simple casting that the SQL parser can do, so in the VALUES line you can have '2022-06-21'::date and that will correctly cast to DATE.
SELECT column1, system$typeof(column1) FROM VALUES
('2022-06-21'::date),
('2022-05-21'::date);
gives:
COLUMN1     SYSTEM$TYPEOF(COLUMN1)
2022-06-21  DATE[SB4]
2022-05-21  DATE[SB4]
And thus for this workflow it would have been valid to use:
INSERT INTO slipstream VALUES
('2022-06-21'::timestamp),
('2022-05-21'::timestamp);
number of rows inserted
2
But complex function calls need to be executed as SQL, hence, as Lukasz mentioned, the need for the INSERT/SELECT pattern.
Now you can be clever and put a VALUES on the SELECT to data-drive the parameters to those functions (I had hoped to feed the VALUES into uniform itself, but its bounds need to be constants, so I settled for an offset):
CREATE TABLE slipstream( visit_timestamp time);
INSERT INTO slipstream
SELECT dateadd(second, uniform(1, 10, random() ) + column1, current_time() )
FROM VALUES
(0),
(-10),
(-20);
number of rows inserted
3
SELECT * FROM slipstream;
VISIT_TIMESTAMP
16:18:04
16:17:55
16:17:42
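The same recipe (a base time plus a uniformly random offset of 1 to 10 seconds) can be sketched outside SQL. The helper name below is illustrative, not part of the question:

```python
import datetime
import random

def random_visit_time(base, lo=1, hi=10):
    """Shift base forward by a uniform random whole number of seconds
    in [lo, hi], mirroring dateadd(second, uniform(1, 10, random()),
    current_time()) in the Snowflake answer."""
    return base + datetime.timedelta(seconds=random.randint(lo, hi))

base = datetime.datetime(2022, 6, 21, 12, 0, 0)
t = random_visit_time(base)
```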

Related

SQL Server : display column with each data set

I'm exporting data out to a flat file to import into a SIEM. Is there a way to display as "column name = data" for each item?
SELECT
[Description], [MessageDescription], [CardNumber],
[TimeStamp_Server], [SPMDescription] [CardHolderID],
[FirstName], [MiddleName], [LastName],
[CardStatusID], [CardStatusDescription], [Imprint],
[TransactionNumber]
FROM
[DB].[dbo].[Message]
WHERE
CONVERT(varchar(10), TimeStamp_Server, 120) = CONVERT(varchar(10), GETDATE(), 120)
Here is how it currently presents in the flat file.
Description,MessageDescription,CardNumber,TimeStamp_Server,CardHolderID,FirstName,MiddleName,LastName,CardStatusID,CardStatusDescription,Imprint,TransactionNumber
North Entry,AccessGrantedNoEntry,0,2023-02-08 09:52:19,Retail Center C004 Converted PXL250-2DOOR,,,,,,,527312
I'd like it to display as this for each row
Description = North Entry,
MessageDescription = AccessGrantedNoEntry,
CardNumber = 0,
TimeStamp_Server = 2023-02-08
... and so on.
This is a side issue (so community wiki), but you can significantly improve performance of this query by changing the WHERE clause like this (assuming TimeStamp_Server is a DateTime column):
WHERE TimeStamp_Server >= cast(cast(getdate() as date) as datetime)
AND TimeStamp_Server < cast(dateadd(day, 1, cast(getdate() as date)) as datetime)
This helps in three ways:
Thanks to cultural/internationalization issues, converting dates to and from string values is far slower and more error-prone than we'd like to believe. Sticking with Date functions and types will always perform better and be more accurate.
By shifting all the modifications onto getdate(), so TimeStamp_Server is unaltered, we avoid doing the conversion on every row in the table.
For the same reason, we preserve the use of any index that might exist on the column. This cuts to the core of database performance.
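The half-open day range that the sargable WHERE rewrite compares against can be sketched as follows; day_bounds is an illustrative helper, not part of the answer:

```python
import datetime

def day_bounds(now):
    """Half-open range [midnight today, midnight tomorrow), the same pair
    of boundaries the rewritten WHERE clause compares TimeStamp_Server
    against."""
    start = datetime.datetime.combine(now.date(), datetime.time.min)
    return start, start + datetime.timedelta(days=1)

start, end = day_bounds(datetime.datetime(2023, 2, 8, 9, 52, 19))
```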
With a bit of JSON and string_agg()
Select B.NewVal
From (
--Your Query Here--
) A
Cross Apply (
Select NewVal = string_agg(concat([key],' = ',value),',')
From openjson( (Select A.* For JSON Path,Without_Array_Wrapper ) )
) B
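What that OPENJSON/string_agg pair produces per row can be sketched in plain Python; the dict literal below just reuses a few columns from the sample output:

```python
def label_row(row):
    """Join a row's columns as 'name = value' pairs separated by commas,
    like string_agg(concat([key], ' = ', value), ',') over the row's JSON."""
    return ",".join(f"{key} = {value}" for key, value in row.items())

line = label_row({
    "Description": "North Entry",
    "MessageDescription": "AccessGrantedNoEntry",
    "CardNumber": 0,
})
```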
A minimal reproducible example is not provided. So, I am shooting from the hip.
Please try the following conceptual example.
IMHO, the JSON or XML output format would be much more reliable.
SQL
-- DDL and sample data population, start
DECLARE @tbl TABLE (id INT PRIMARY KEY, case_date DATE, Product INT);
INSERT INTO @tbl (id, case_date, Product) VALUES
(55, '2022-08-01', 11),
(66, '2022-05-21', 51);
-- DDL and sample data population, end
SELECT t.*
, result = STUFF(XMLData.query('
for $x in /root/*
return concat(", ", local-name($x), "=", $x/text()[1])
').value('.', 'VARCHAR(4096)'), 1,2,'')
FROM @tbl AS t
CROSS APPLY (SELECT t.* FOR XML PATH(''), TYPE, ROOT('root')) AS t1(XMLData);
Output
id  case_date   Product  result
55  2022-08-01  11       id=55 , case_date=2022-08-01 , Product=11
66  2022-05-21  51       id=66 , case_date=2022-05-21 , Product=51
Thank you @Stu for providing what I was seeking in a comment:
So you want every column name reproduced on every row? That's going to significantly increase the size of the file! Just use
select concat('ColumnName = ', ColumnName) AS ColumnName, ...

Casting invalid string date to date in Snowflake

I have a column with invalid date strings in snowflake and I need to cast them to date. I tried with TRY_CAST, TRY_DATE, IS_DATE but nothings seems to work, i.e.
select * from (values (1, TRY_CAST('1985-02-30' as date)));
Invalid expression [TRY_CAST('1985-02-30' AS DATE)] in VALUES clause
Is there an easy way to do a validation on the date itself?
The VALUES clause requires simple expressions or constants. The alternative is SELECT:
select * from (values (1, TRY_CAST('1985-02-30' as date)));
=>
select * from (select 1, TRY_CAST('1985-02-30' as date));
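The validation TRY_CAST performs (returning NULL for impossible dates such as February 30th) can be sketched with strptime; try_cast_date is an illustrative name, not a real API:

```python
import datetime

def try_cast_date(s, fmt="%Y-%m-%d"):
    """Return a date, or None when the string is not a valid calendar
    date, mimicking TRY_CAST(... AS DATE)."""
    try:
        return datetime.datetime.strptime(s, fmt).date()
    except ValueError:
        return None

bad = try_cast_date("1985-02-30")   # February has no 30th day
good = try_cast_date("1985-02-28")
```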

How to remove trailing zeros from a milliseconds value (datetime2) in string format with one query

I am using SQL Server. I convert values from datetime2 columns with different scales to strings and compare them, to handle null values if they exist. So I need to convert this data without trailing zeros, in one query, without any procedures.
For example,
'2018-06-23 07:30:20.100' should be '2018-06-23 07:30:20.1'
'2018-06-23 07:30:20.000' should be '2018-06-23 07:30:20.'
'2018-06-23 07:30:20.101' should be '2018-06-23 07:30:20.101'
I used following:
select CONVERT(VARCHAR, col1, 126) from [DBO].[DATE_TABLE1]
But it shows unexpected result:
'2018-06-23 07:30:20.100' defined as '2018-06-23 07:30:20.100' - unexpected (trailing zeros weren't removed)
'2018-06-23 07:30:20.000' defined as '2018-06-23 07:30:20' - expected
'2018-06-23 07:30:20.101' defined as '2018-06-23 07:30:20.101' - expected
How can I convert a datetime2 value without trailing zeros?
Thank you
Try this:
DECLARE @DataSource TABLE
(
[value] DATETIME2(3)
);
INSERT INTO @DataSource ([value])
VALUES ('2018-06-23 07:30:20.100')
,('2018-06-23 07:30:20.000')
,('2018-06-23 07:30:20.101');
SELECT [value]
,CONVERT(VARCHAR(20), [value], 121) + REPLACE(FORMAT(DATEPART(MILLISECOND, [value]) / 1000.0, 'g3'), '0.', '') AS [new_value]
FROM @DataSource;
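The transformation the question asks for (strip trailing zeros from the fractional seconds only) can be sketched directly on the string; this assumes the fractional part, when present, follows the last dot:

```python
def strip_trailing_zeros(ts):
    """'2018-06-23 07:30:20.100' -> '2018-06-23 07:30:20.1'
    (and '.000' -> '.', matching the expected output in the question)."""
    whole, dot, frac = ts.rpartition(".")
    if not dot:
        return ts  # no fractional part to trim
    return f"{whole}.{frac.rstrip('0')}"

trimmed = strip_trailing_zeros("2018-06-23 07:30:20.100")
```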
Thank you, everyone. In the end I went the other way and decided to pad with zeros up to the maximum fractional-seconds scale of 7. I understand this conflicts with my original question, but it helps to compare datetime2 values in string format. I decided to use the following statement:
CONVERT(VARCHAR, CAST({{ columnName }} AS DATETIME2), 121)
This converts with all fractional-seconds digits present, so I am able to compare values from columns with different scales (fractional seconds).
If it had been Snowflake, I would have used 'YYYY-MM-DD HH:MI:SS.FF9'...
In SQL Server, for example, suppose we have two tables whose columns are datetime2 with different scales.
create table [DBO].[DATE_TABLE1] (col1 datetime2(1))
create table [DBO].[DATE_TABLE2] (col1 datetime2(7))
I inserted the same value, '2018-06-23 07:30:20.1', into both.
After performing 'CONVERT(VARCHAR, CAST({{ columnName }} AS DATETIME2), 121)' on both tables I get the same string value:
'2018-06-23 07:30:20.1000000' from [DBO].[DATE_TABLE1]
'2018-06-23 07:30:20.1000000' from [DBO].[DATE_TABLE2]
And these values will be equal.
If I used 'CONVERT(VARCHAR, {{ columnName }}, 121)' instead, I would get different values:
'2018-06-23 07:30:20.1' from [DBO].[DATE_TABLE1]
'2018-06-23 07:30:20.1000000' from [DBO].[DATE_TABLE2]
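The asker's workaround can be sketched the same way: right-pad both strings to a fixed fractional-seconds width so values from columns of different scales compare equal. normalize is an illustrative name; 7 digits is the maximum datetime2 scale:

```python
def normalize(ts, digits=7):
    """Right-pad the fractional seconds with zeros to a fixed width, like
    CAST(... AS DATETIME2) before CONVERT(..., 121) fixes the scale at 7."""
    whole, dot, frac = ts.rpartition(".")
    if not dot:
        whole, frac = ts, ""  # no fractional part in the input
    return f"{whole}.{frac.ljust(digits, '0')}"

a = normalize("2018-06-23 07:30:20.1")        # as stored in datetime2(1)
b = normalize("2018-06-23 07:30:20.1000000")  # as stored in datetime2(7)
```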

Insert a derived sum of a column into another table as a record in an Oracle database

I get this error on an Oracle database:
ORA-00936: missing expression
I don't know how to properly insert into one table a record built from the sums of values in other tables.
INSERT INTO initial_transaction_inventory
VALUES (10000, SELECT SUM (pyi_total_price) FROM payable_inventory, SELECT SUM (pai_total_cost) FROM paid_inventory, SYSDATE, utl_raw.cast_to_raw ('C:\Users\username\Documents'));
This is what I'm trying to do with my code.
You just need to wrap each subquery in parentheses, as follows:
INSERT INTO initial_transaction_inventory
VALUES (10000,
(SELECT SUM (pyi_total_price) FROM payable_inventory),
(SELECT SUM (pai_total_cost) FROM paid_inventory),
SYSDATE,
utl_raw.cast_to_raw ('C:\Users\username\Documents'));

SQL Server Insert Command Error

INSERT INTO BORCODEME
( BORCODEME.IslemTarihi, BORCODEME.IslemAciklamasi,BORCODEME.IslemTutari)
VALUES(
(SELECT BORCLAR.BorcTarih,BORCLAR.BorcAciklama,BORCLAR.BorcTutari FROM BORCLAR WHERE BORCLAR.BorcMusteriID=6),
(SELECT ODEMELER.OdemeTarihi,ODEMELER.OdemeAciklama,ODEMELER.OdemeTutar FROM ODEMELER WHERE ODEMELER.OdemeMusteriID=6)
)
My SQL command is this, and I get these errors:
Msg 116, Level 16, State 1, Line 4
Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
Msg 116, Level 16, State 1, Line 6
Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
Msg 109, Level 15, State 1, Line 1
There are more columns in the INSERT statement than values specified in the VALUES clause. The number of values in the VALUES clause must match the number of columns specified in the INSERT statement.
Not sure what you're really looking for - are you trying to insert the three columns from the two tables? Then write your INSERT like this:
INSERT INTO BORCODEME(IslemTarihi, IslemAciklamasi, IslemTutari)
SELECT
BORCLAR.BorcTarih, BORCLAR.BorcAciklama, BORCLAR.BorcTutari
FROM
BORCLAR
WHERE
BORCLAR.BorcMusteriID = 6
UNION
SELECT
ODEMELER.OdemeTarihi, ODEMELER.OdemeAciklama, ODEMELER.OdemeTutar
FROM
ODEMELER
WHERE
ODEMELER.OdemeMusteriID = 6
So this will insert the three values from BORCLAR and another row with the three values from ODEMELER.
If that's not what you want, then you need to explain in more detail what you really want instead.
In general, you can either use this syntax:
INSERT INTO dbo.TargetTable (List-of-Columns)
VALUES (List-of-atomic-values)
or if you cannot provide atomic values (literals or T-SQL variables), then you can use
INSERT INTO dbo.TargetTable (List-of-Columns)
SELECT list-of-columns
FROM dbo.SourceTable
(but you cannot mix - you cannot have VALUES and then use SELECT inside of it)
In both cases, the number of columns in the INSERT statement must exactly match the number of atomic values provided in VALUES or the number of columns selected by the SELECT statement.
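The INSERT ... SELECT ... UNION pattern from the accepted answer can be sketched against an in-memory SQLite database; the table and column names below are simplified stand-ins, not the asker's real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Two source tables and one target, standing in for BORCLAR, ODEMELER,
# and BORCODEME.
con.executescript("""
    CREATE TABLE borclar   (tarih TEXT, aciklama TEXT, tutar REAL);
    CREATE TABLE odemeler  (tarih TEXT, aciklama TEXT, tutar REAL);
    CREATE TABLE borcodeme (IslemTarihi TEXT, IslemAciklamasi TEXT, IslemTutari REAL);
    INSERT INTO borclar   VALUES ('2023-01-01', 'debt',    100.0);
    INSERT INTO odemeler  VALUES ('2023-01-15', 'payment',  40.0);
""")
# One INSERT ... SELECT fed by a UNION, instead of subqueries in VALUES.
con.execute("""
    INSERT INTO borcodeme (IslemTarihi, IslemAciklamasi, IslemTutari)
    SELECT tarih, aciklama, tutar FROM borclar
    UNION
    SELECT tarih, aciklama, tutar FROM odemeler
""")
inserted = con.execute("SELECT COUNT(*) FROM borcodeme").fetchone()[0]
```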
