Mule 4: Insert Date field into Oracle if it is present

I am trying to insert some data into an Oracle database and am using Mule 4.
This is working with the current payload:
{
"empId" : "001",
"empName" : "Xyz"
}
The db insert:
<db:insert config-ref="Database_Config" target="insertEmp">
    <db:sql><![CDATA[#["INSERT INTO EMP(EMP_ID, EMP_NAME) VALUES (:emp_id, :emp_name)"]]]></db:sql>
    <db:input-parameters><![CDATA[#[{emp_id: payload.empId, emp_name: payload.empName}]]]></db:input-parameters>
</db:insert>
Now the problem is that the payload will also contain empStartDate, empEndDate and createdDate:
{
"empId" : "001",
"empName" : "Xyz",
"empStartDate" : "2019-07-17",
"empEndDate" : null , // can be null
"createdDate" : null // can be null
}
Now, to insert into Oracle I will need to convert empStartDate into a Date, so I was thinking of using the Oracle Date function:
TO_DATE('2019-07-17', 'YYYY-MM-DD')
However, I am not sure how to include this in my Mule flow. Similarly, if empEndDate is null I should simply pass NULL, while for createdDate I need to pass Oracle SYSDATE.
I'm just not sure how to build the values dynamically and pass them to the database ...
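One possible approach (a minimal sketch, not a confirmed solution) is to keep the SQL static, let Oracle's NVL(..., SYSDATE) handle the createdDate default, and coerce the incoming strings to the DataWeave Date type in the input parameters so the connector binds them as DATE values; a null DataWeave value is then bound as SQL NULL. The column names EMP_START_DATE, EMP_END_DATE and CREATED_DATE are assumptions, since the question does not show them:
<db:insert config-ref="Database_Config" target="insertEmp">
    <db:sql><![CDATA[
        INSERT INTO EMP (EMP_ID, EMP_NAME, EMP_START_DATE, EMP_END_DATE, CREATED_DATE)
        VALUES (:emp_id, :emp_name, :emp_start_date, :emp_end_date, NVL(:created_date, SYSDATE))
    ]]></db:sql>
    <db:input-parameters><![CDATA[#[{
        emp_id: payload.empId,
        emp_name: payload.empName,
        // "2019-07-17" coerces to a DataWeave Date, which the connector binds as an Oracle DATE
        emp_start_date: payload.empStartDate as Date {format: "yyyy-MM-dd"},
        // null stays null and is bound as SQL NULL
        emp_end_date: if (payload.empEndDate == null) null else (payload.empEndDate as Date {format: "yyyy-MM-dd"}),
        // when createdDate is null, the NVL in the SQL falls back to SYSDATE
        created_date: if (payload.createdDate == null) null else (payload.createdDate as Date {format: "yyyy-MM-dd"})
    }]]]></db:input-parameters>
</db:insert>
Alternatively, keeping the parameters as plain strings and wrapping them in TO_DATE(:emp_start_date, 'YYYY-MM-DD') and NVL(TO_DATE(:created_date, 'YYYY-MM-DD'), SYSDATE) inside the SQL should behave the same way, since TO_DATE(NULL, ...) simply yields NULL.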

Related

Snowflake: OBJECT_CONSTRUCT leaving out null values when I use the COPY command to produce a JSON output file

I use the Snowflake COPY command below, which returns a file with JSON content:
copy into @elasticsearch/product/sf_index
from (select object_construct('id', id, 'alpha', alpha) from table limit 1)
file_format = (type = json, COMPRESSION=NONE), overwrite=TRUE, single = TRUE, max_file_size=5368709120;
The data is:
id  alpha
1   null
The output file is:
{
"id" : 1
}
but I need the null values to be kept:
{
"id" : 1,
"alpha" : null
}
You can use the function OBJECT_CONSTRUCT_KEEP_NULL.
Documentation: https://docs.snowflake.com/en/sql-reference/functions/object_construct_keep_null.html
Example:
select OBJECT_CONSTRUCT_KEEP_NULL('id', id, 'alpha', alpha)
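For the original use case, this could be dropped straight into the COPY command from the question (same stage, path and options as the question; an untested sketch):
copy into @elasticsearch/product/sf_index
from (select object_construct_keep_null('id', id, 'alpha', alpha) from table limit 1)
file_format = (type = json, COMPRESSION=NONE), overwrite=TRUE, single = TRUE, max_file_size=5368709120;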
Would it be possible for you to check programmatically if the value is null and, if it is null, use the below?
select object_construct('id',1,'alpha',parse_json('null'));
Per the Snowflake documentation:
If the key or value is NULL (i.e. SQL NULL), the key-value pair will be omitted from the resulting object. A key-value pair consisting of a not-null string as key and a JSON NULL as value (i.e. PARSE_JSON('null')) will not be omitted.
The other option is to just send it without the null attribute to Elastic and then take care of it on retrieval from Elastic.
How about this:
select object_construct('id', id, 'alpha', case when alpha is not null then alpha else 'null' end) from table limit 1;
CASE should be supported by the COPY command.
"null" is valid in a JSON document, as per this SO question:
Is null valid JSON (4 bytes, nothing else)
OK, another possible way is to use a union:
select object_construct('id', id, 'alpha', parse_json('null')) from table where alpha is null
union
select object_construct('id', id, 'alpha', alpha) from table where alpha is not null;
select object_construct('id', id, 'alpha', IFNULL(alpha, PARSE_JSON('null'))) from table limit 1
Use IFNULL to check if the value is null and replace it with a JSON null.

read duplicate metrics in extract schema in U-SQL

@input =
    EXTRACT firstname string, name string, name string
    FROM "/table1.csv"
    USING Extractors.Csv(quoting : false, silent : true);
@output = SELECT * FROM @input;
OUTPUT @output TO "/data_output.csv" USING Outputters.Csv(quoting : false);
The EXTRACT schema contains a duplicate column (name).
How can we read duplicate columns?
U-SQL reads columns by position in EXTRACT statement and not by name, so you can call your columns, for example, Name1 and Name2 (or something more logical to your business domain).
@input =
    EXTRACT firstname string, name1 string, name2 string
    FROM "/table1.csv"
    USING Extractors.Csv(quoting : false);
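Putting it together with the question's SELECT and OUTPUT (a sketch using the question's paths and extractor options); downstream statements can then refer to name1 and name2 unambiguously:
@input =
    EXTRACT firstname string, name1 string, name2 string
    FROM "/table1.csv"
    USING Extractors.Csv(quoting : false, silent : true);
@output = SELECT * FROM @input;
OUTPUT @output TO "/data_output.csv" USING Outputters.Csv(quoting : false);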

How do I import an array of data into separate rows in a hive table?

I am trying to import data in the following format into a Hive table:
[
{
"identifier" : "id#1",
"dataA" : "dataA#1"
},
{
"identifier" : "id#2",
"dataA" : "dataA#2"
}
]
I have multiple files like this and I want each {} to form one row in the table. This is what I have tried:
CREATE EXTERNAL TABLE final_table(
identifier STRING,
dataA STRING
) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION "s3://bucket/path_in_bucket/"
This is not creating a single row for each {} though. I have also tried
CREATE EXTERNAL TABLE final_table(
rows ARRAY< STRUCT<
identifier: STRING,
dataA: STRING
>>
) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION "s3://bucket/path_in_bucket/"
but this does not work either. Is there some way of telling the Hive query that the input is an array, with each record being an item in the array? Any suggestions on what to do?
Here is what you need
Method 1: Adding name to the array
Data
{"data":[{"identifier" : "id#1","dataA" : "dataA#1"},{"identifier" : "id#2","dataA" : "dataA#2"}]}
SQL
SET hive.support.sql11.reserved.keywords=false;
CREATE EXTERNAL TABLE IF NOT EXISTS ramesh_test (
data array<
struct<
identifier:STRING,
dataA:STRING
>
>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 'my_location';
SELECT rows.identifier,
rows.dataA
FROM ramesh_test d
LATERAL VIEW EXPLODE(d.data) d1 AS rows ;
Output
Method 2 - No Changes to the data
Data
[{"identifier":"id#1","dataA":"dataA#1"},{"identifier":"id#2","dataA":"dataA#2"}]
SQL
CREATE EXTERNAL TABLE IF NOT EXISTS ramesh_raw_json (
json STRING
)
LOCATION 'my_location';
SELECT get_json_object (exp.json_object, '$.identifier') AS identifier,
       get_json_object (exp.json_object, '$.dataA') AS dataA
FROM ( SELECT json_object
       FROM ramesh_raw_json a
       LATERAL VIEW EXPLODE (split(regexp_replace(regexp_replace(a.json,'\\}\\,\\{','\\}\\;\\{'),'\\[|\\]',''), '\\;')) json_exploded AS json_object ) exp;
Output
JSON records in data files must appear one per line; an empty line would produce a NULL record.
This JSON should work:
{ "identifier" : "id#1", "dataA" : "dataA#1" }
{ "identifier" : "id#2", "dataA" : "dataA#2" }

Inserting data into user defined type in Cassandra

I am having trouble inserting JSON and updating data in my Cassandra table.
I have a table with the following structure:
CREATE TYPE keyspace.connected_hw (
  connected_element text,
  support_list frozen<list<text>>
);
CREATE TABLE keyspace.sample_table (
  id text PRIMARY KEY,
  name text,
  connected_items frozen<keyspace.connected_hw>
);
My insert script looks like this:
INSERT INTO keyspace.sample_table JSON '
{
  "id": "12345",
  "name": "object 1",
  "connected_items": [
    {
      "connected_element": "56789",
      "support_list": ""
    }
  ]
}
';
After I run this, I get the following error:
Error decoding JSON value for connected_items: Expected a map, but got a ArrayList :
I tried to insert the other fields without the 'connected_items' field first, then update it afterwards:
INSERT INTO keyspace.sample_table JSON '
{
  "id": "12345",
  "name": "object 1"
}
';
This works until I try to update 'connected_items' with:
UPDATE keyspace.sample_table
SET connected_items = [{connected_element:'56789',support_list:['type1','type2']}]
WHERE id='12345'
After running this I get the following error:
InvalidQueryException: Invalid list literal for connected_items of type frozen<connected_hw>
You are declaring connected_items as type connected_hw, but you are inserting the value as a list of connected_hw; also, support_list is a list of text, not just text.
You can either change your insert/update format like below:
INSERT INTO sample_table JSON '{"id": "12345", "name": "object 1", "connected_items": { "connected_element": "56789", "support_list": ["a","b"] } }' ;
And
UPDATE sample_table SET connected_items = {connected_element:'56789',support_list:['type1','type2']} where id='12345' ;
Or change the type of connected_items to a list of connected_hw:
ALTER TABLE sample_table DROP connected_items ;
ALTER TABLE sample_table ADD connected_items list<frozen <connected_hw>>;
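With that second option in place, the list-shaped JSON from the original insert should then be accepted; a sketch (support_list given sample values, since an empty string is not a valid list):
INSERT INTO keyspace.sample_table JSON '
{
  "id": "12345",
  "name": "object 1",
  "connected_items": [
    {
      "connected_element": "56789",
      "support_list": ["type1", "type2"]
    }
  ]
}
';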

How to update PostgreSQL DB timestamp to null in Asterisk?

I use ast_update_realtime() to update a PostgreSQL DB:
res = ast_update_realtime("confinfo", "id", "01", "start_time", "NULL", SENTINEL);
But I got an error like this:
[Oct 20 15:44:50] ERROR[30428][C-00000000]: res_config_pgsql.c:169 _pgsql_exec: PostgreSQL RealTime: Query Failed because: ERROR: invalid input syntax for type timestamp with time zone: "NULL"
LINE 1: UPDATE confinfo SET start_time = 'NULL' WHERE id = '01'
^
(PGRES_FATAL_ERROR)
I found that the reason is the string "NULL" instead of a bare NULL in the SQL.
How can I correct it?
UPDATE confinfo SET start_time = NULL WHERE id = '01'
instead of
UPDATE confinfo SET start_time = 'NULL' WHERE id = '01'
So you should pass NULL without quotes.
Instead of providing a NULL value, you can try putting an empty string like '', or simply:
res = ast_update_realtime("confinfo", "id", "01", "start_time", NULL, SENTINEL);
