Talend parse Date "yyyy-MM-dd'T'HH:mm:ss'.000Z'" - salesforce

I have an error parsing a date in Talend.
My input is an Excel file read as a String, and my output is a Date in the following Salesforce format "yyyy-MM-dd'T'HH:mm:ss'.000Z'"
I have a tMap with this connection
TalendDate.parseDate("yyyy-MM-dd'T'HH:mm:ss'.000Z'",Row1.firstDate)
but it throws the following error:
java.lang.RuntimeException: java.text.ParseException: Unparseable
date: "2008-05-11T12:02:46.000+0000" at
routines.TalendDate.parseDate(TalendDate.java:895)
Any help?
Thanks

In TalendDate.parseDate, the parameter "pattern" must match the pattern of the input String, and not the pattern of the Date you want in the output.
You can try :
TalendDate.parseDate("yyyy-MM-dd'T'HH:mm:ss'.000+0000'",Row1.firstDate )
Formatting of the Date output is set in the 'schema' menu, in the "Date Model" column.
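For reference, TalendDate.parseDate is backed by java.text parsing (the stack trace shows java.text.ParseException), so a pattern can be checked against the raw input with plain Java. A minimal sketch, assuming the input always carries a numeric UTC offset such as "+0000" (which Z matches):

import java.text.SimpleDateFormat;
import java.util.Date;

public class PatternCheck {
    public static void main(String[] args) throws Exception {
        // The pattern describes the incoming string: literal T, milliseconds, numeric offset.
        SimpleDateFormat input = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
        Date parsed = input.parse("2008-05-11T12:02:46.000+0000");
        System.out.println(parsed);
    }
}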

Try this, which parses the date and time portion and leaves the trailing ".000+0000" unconsumed (java.text parsing does not require the whole string to match):
TalendDate.parseDate("yyyy-MM-dd'T'HH:mm:ss", Row1.firstDate);

Related

I cannot create a table in Bigquery due to schema issue

I am having difficulty creating a table in BigQuery.
The table I tried to upload has the columns Id, SleepDay, TotalSleepRecords, TotalMinutesAsleep, and TotalTimeInBed (screenshot not reproduced).
First, I tried auto-detect but it didn't work.
And the error message is like this:
Error while reading data, error message: Could not parse '4/12/2016 12:00:00 AM' as TIMESTAMP for field SleepDay (position 1) starting at location 65 with message 'Invalid time zone: AM'
So, I tried "Edit as text" like this:
Id:INTEGER, SleepDay:DATETIME, TotalSleepRecords:INTEGER, TotalMinutesAsleep:INTEGER, TotalTimeInBed:INTEGER
And now I see error like this:
Error while reading data, error message: Could not parse 'Id' as INT64 for field Id (position 0) starting at location 0 with message 'Unable to parse'
Isn't an Id like 1503960366 an integer?
How should I change this?
Could not parse 'Id' as INT64 for field Id (position 0) starting at
location 0 with message 'Unable to parse'
This error message reads to me as if you are trying to read the header row ('Id') in as an integer, too. There is an option to skip one or more header rows under "Advanced options" when adding a table to BigQuery; try entering 1 there to skip the top row.
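The first error ('Invalid time zone: AM') is a separate problem: '4/12/2016 12:00:00 AM' is not a literal BigQuery accepts as TIMESTAMP or DATETIME. A hedged sketch, assuming SleepDay is loaded as STRING into a hypothetical table mydataset.sleep_day, converts it afterwards:

-- %I and %p handle the 12-hour clock and the AM/PM marker that broke auto-detect.
SELECT
  Id,
  PARSE_DATETIME('%m/%d/%Y %I:%M:%S %p', SleepDay) AS SleepDay
FROM mydataset.sleep_day;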

Snowflake - Setting Date Time format in result of Query

I'm running a query in Snowflake and then exporting the result. I need to convert a date value such as 2022-02-23 16:23:58.805 to the format 'yyyy-MM-ddThh:mm:ss'.
I'm not sure of the best way to convert the date format. I've tried TO_TIMESTAMP, but keep getting the following error: 'too many arguments for function [TO_TIMESTAMP(FSA.LAST_UPDATED, 'yyyy-MM-ddThh:mm:ss')] expected 1, got 2'
This looks like a conversion issue. Please check the datatype of your column last_updated. There also seems to be a typo in your question: for the minutes portion of the format, use mi (hh:mi:ss).
Refer below -
select to_timestamp('2022-02-23 16:23:58.805'::TIMESTAMP, 'yyyy-mm-dd hh:mi:ss.ff');
000939 (22023): SQL compilation error: error line 1 at position 7
too many arguments for function
[TO_TIMESTAMP(TO_TIMESTAMP_NTZ('2022-02-23 16:23:58.805'), 'yyyy-mm-dd hh:mi:ss.ff')] expected 1, got 2
select to_timestamp('2022-02-23 16:23:58.805'::string,'yyyy-mm-dd hh:mi:ss.ff');
TO_TIMESTAMP('2022-02-23 16:23:58.805'::STRING,'YYYY-MM-DD HH:MI:SS.FF')
2022-02-23 16:23:58.805
TO_TIMESTAMP is for string -> timestamp, and TO_CHAR is for timestamp -> string; the TO_CHAR( <date_or_time_expr> [, '<format>' ] ) form is the one you seem to want.
This SQL shows string -> timestamp -> formatted string:
SELECT
'2022-02-23 16:23:58.805' as time_string,
to_timestamp(time_string) as a_timestamp,
to_char(a_timestamp, 'yyyy-mm-ddThh:mi:ss') as formatted_string;

TIME_STRING             | A_TIMESTAMP             | FORMATTED_STRING
2022-02-23 16:23:58.805 | 2022-02-23 16:23:58.805 | 2022-02-23T16:23:58
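Applied to the column from the question's error message (LAST_UPDATED is assumed to already be a timestamp; the table behind the FSA alias is a placeholder):

select to_char(FSA.LAST_UPDATED, 'yyyy-mm-ddThh:mi:ss') as last_updated
from fact_sales FSA;  -- hypothetical table name for the FSA alias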

Create current date in String format and parse to date as string in Apex

Goal:
I need to first create a String representing the current date.
Afterwards this String needs to be parsed and used to build an instance of the Date class.
Initial attempt:
In my test class I create a current date as a String input for my tested method in the following manner:
String inputDate = date.today().format(); // 13:28:15:378 USER_DEBUG [24]|DEBUG|17.3.2017
However, when I attempt to create an instance of a Date object like this:
Date dateFromInput = date.valueOf(inputDate);
I receive the following error:
13:28:15:398 FATAL_ERROR System.TypeException: Invalid date: 17.3.2017
The following code just works:
((DateTime)Dob).format('yyyy-MM-dd')
Date.format() returns a string in the current local date format of the logged-in user.
Date.valueOf needs an input string in the format yyyy-MM-dd HH:mm:ss in the local time zone.
Below should work (Date has no format(pattern) method, so go through Datetime to get the fixed format):
String inputDate = Datetime.now().format('yyyy-MM-dd HH:mm:ss');
Date dateFromInput = Date.valueOf(inputDate);
In the documentation, there is a difference between the parse and valueOf Date methods that escaped me:
parse(stringDate)
Constructs a Date from a String. The format of the String depends on the local date format.
valueOf(stringDate)
Returns a Date that contains the value of the specified String.
What I wanted was the parse:
String inputDate = date.today().format();
Date dateFromInput = date.parse(inputDate);
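A minimal Apex sketch contrasting the two (the sample values are assumptions):

// parse() expects the running user's local date format, i.e. what format() produces.
String localized = Date.today().format();      // e.g. '17.3.2017' in a European locale
Date viaParse = Date.parse(localized);
// valueOf() expects the fixed format yyyy-MM-dd HH:mm:ss instead.
Date viaValueOf = Date.valueOf('2017-03-17 00:00:00');
System.debug(viaParse);
System.debug(viaValueOf);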
You can try Moment.apex. Here is the link
Datetime dt = new Moment('2018/01/12 10:00:00', 'yyyy/MM/dd HH:mm:ss').toDatetime();

year coming wrong in TO_DATE function in db2

I tried to run the query below against a DB2 database:
My date string: 122887 (format mmddyy)
select DATE(TO_DATE('122887', 'mmddyy')) from SYSIBM.dual;
The result is: 2087-12-28
But I am expecting 1987-12-28.
How can I achieve this?
You need to use the "adjusted year" for your query. Instead of YY it is RR:
values(DATE(TO_DATE('122887', 'mmddrr')))
1
----------
12/28/1987
Details are in the documentation for TO_DATE/TIMESTAMP_FORMAT.
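A side-by-side sketch against the same literal, showing the two interpretations:

-- yy completes the two-digit year with the current century (2087),
-- rr applies adjusted-year rules, resolving 87 to 1987.
values (DATE(TO_DATE('122887', 'mmddyy')), DATE(TO_DATE('122887', 'mmddrr')));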

Hive Serde errors with Array<Struct<>> org.json.JSONArray cannot be cast to [Ljava.lang.Object;

I have created a table :
add jar /../xlibs/hive-json-serde-0.2.jar;
CREATE EXTERNAL TABLE SerdeTest
(Unique_ID STRING
,MemberID STRING
,Data ARRAY<STRUCT<SerialNo:INT,VariableName:STRING,VariableValue:STRING>>
)
PARTITIONED BY (Pyear INT, Pmonth INT)
ROW FORMAT SERDE "org.apache.hadoop.hive.contrib.serde2.JsonSerde";
ALTER TABLE SerdeTest ADD
PARTITION (Pyear = 2014, Pmonth =03) LOCATION '../Test2';
The data in the file :
{"Unique_ID":"ABC6800650654751","MemberID":"KHH966375835","Data":[{"SerialNo":1,"VariableName":"Var1","VariableValue":"A_49"},{"SerialNo":2,"VariableName":"Var2","VariableValue":"B_89"},{""SerialNo":3,"VariableName":"Var3","VariableValue":"A_99"}]}
Select query that I am using:
select Data[0].SerialNo from SerdeTest where Unique_ID = 'ABC6800650654751';
however, when I run this query I get the following error:
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row [Error getting row data with exception java.lang.ClassCastException: org.json.JSONArray cannot be cast to [Ljava.lang.Object;
at org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector.getList(StandardListObjectInspector.java:98)
at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:330)
at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:386)
at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:237)
at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:223)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:539)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:157)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
]
Can anyone please suggest what I am doing wrong?
A few suggestions:
Make sure that all the Hive packages and hive-json-serde-0.2.jar have execute permission for the hadoop user.
Hive creates a file called derby.log and a metastore_db directory in the hive directory; the user invoking the Hive query must be allowed to create files and directories there.
The location for the data should end with /, e.g. LOCATION '../Test2/';
In short, the working JAR is json-serde-1.3-jar-with-dependencies.jar, which can be found here. This one works with STRUCT and can even ignore some malformed JSON. During the creation of the table, include the following code:
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ("ignore.malformed.json" = "true")
LOCATION ...
If needed, it is possible to recompile it from here or here. I tried the first repository and it compiles fine for me after adding the necessary libs. The repository has also been updated recently.
Check for more details here.
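Putting it together, a hedged end-to-end sketch (the jar path is an assumption; the STRUCT fields mirror the JSON sample above):

ADD JAR /../xlibs/json-serde-1.3-jar-with-dependencies.jar;
CREATE EXTERNAL TABLE SerdeTest
(Unique_ID STRING
,MemberID STRING
,Data ARRAY<STRUCT<SerialNo:INT,VariableName:STRING,VariableValue:STRING>>
)
PARTITIONED BY (Pyear INT, Pmonth INT)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ("ignore.malformed.json" = "true");
-- then the original query should work:
select Data[0].SerialNo from SerdeTest where Unique_ID = 'ABC6800650654751';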
