When loading a .csv file into BigQuery with dates in the format DD/MM/YY, the load fails if I specify the schema for the table and select the Date type for that field.
However, if I don't specify the schema and choose "Automatically detect", it works and converts the dates into YYYY-MM-DD.
Is there any way to convert the date into the right format manually and specify the name for that field?
Thanks,
David
Unfortunately, there is no way to control date formatting from the load API. You can load the data into a STRING column first, and then use Standard SQL's PARSE_DATE function to parse it using any custom format.
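For example, a minimal sketch, assuming the raw value was loaded into a STRING column named event_date_raw in a staging table my_dataset.staging (both names are hypothetical):

SELECT
  PARSE_DATE('%d/%m/%y', event_date_raw) AS event_date  -- %y matches two-digit years
FROM my_dataset.staging;

The AS alias is also where you choose the name of the resulting DATE field when you write the query results to the final table.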
There is a createdDate field on Case in the standard Salesforce Datetime format, and I want to change it to dd-mm-yyyy format. I am showing a list of Cases in a Flow, and for that I have created an Apex class. The dates currently appear in the default format, which is yyyy-mm-dd, so I need to change them to dd-mm-yyyy.
Change the user’s locale to one that uses the desired date format. The Salesforce UI renders date time values in the time zone and locale format of the current user.
It’s generally not a good idea or a best practice to implement custom date rendering; there’s a wide variety of ways to get it wrong.
Our business would be providing us with a .csv file. One of the columns in the file would be in a date format. As we know, there are many date formats in Excel. The problem is that we need to check whether the date provided is a correct date. It could be in any format like ddmmyyyy, yyyymmdd, dd-mon-yyyy etc., basically any format that Excel supports.
We are planning to first load the data into a staging area, where the date field would be defined as varchar so that it can accept any data.
Now, either using SSIS or via T-SQL, I need to check whether the date provided is actually a date and, if it is, load it into a different table in YYYYMMDD format.
How do I go about doing the above?
Considering you have your Excel data already loaded into a SQL Server table as varchar (you can easily do this using SSIS), something like this would work:
SELECT
    CASE WHEN ISDATE(YOUR_DATE) = 1
         THEN CONVERT(char(8), CONVERT(date, YOUR_DATE), 112)  -- style 112 = YYYYMMDD
         ELSE NULL END AS MyDate
FROM
    YOUR_TABLE
I don't have access to a SQL Server instance at the moment and can't test the code above, so you may need to adapt it to your needs, but this is the general idea.
You can also do further research on the ISDATE and CONVERT functions in SQL Server. You should be able to achieve what you need by combining them.
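To cover the second part of the question, i.e. loading the valid rows into a different table in YYYYMMDD format, a rough sketch could look like this (TARGET_TABLE and its column name are hypothetical):

INSERT INTO TARGET_TABLE (DateYYYYMMDD)
SELECT CONVERT(char(8), CONVERT(date, YOUR_DATE), 112)
FROM YOUR_TABLE
WHERE ISDATE(YOUR_DATE) = 1;  -- skip values SQL Server does not recognize as dates

Keep in mind that ISDATE and the string-to-date conversion depend on the session's language and DATEFORMAT settings, so some incoming formats may need SET DATEFORMAT, SET LANGUAGE, or explicit handling.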
I created a "Date" field and its format defaulted to MM/DD/YYYY (English). I changed the Date settings in "/admin/config/regional/date-time" to use DD/MM/YYYY and added another "Date" field, which uses that format. Now I have two fields with two different date formats and I have no clue how to set them both to DD/MM/YYYY.
Any idea?
I believe your best practice would be:
converting your corresponding database tables
exporting these tables
deleting this field from your site
running cron
creating this field from scratch and setting it to use your new format
overwriting your corresponding tables with the converted (original) tables in DB
You need to convert your tables after you have saved them. You cannot change them once you have them; normally only conversion works.
I have a csv file that I need to import using CSVREAD. The issue is that its date/time values are in a different format, so I need to parse them. Can someone give me an example of how I am supposed to do it?
I tried: merge into MESSAGE (MESG_DATE_FROM,MESG_DATE_TO,MESG_DISPLAY_SEQ,MESG_TIME_DELAY,MESG_ID,REASONTYPE_MAJOR) SELECT * FROM CSVREAD('MESSAGE_0.csv');
These queries are generated programmatically, so they cannot be written by hand. The problem is that some columns are of a datetime type and are in a different datetime format from the one H2 expects. At the moment of parsing I have no precise way of determining which columns will be datetime, so I cannot easily add a PARSEDATETIME, and the CSV file doesn't contain any column names or other info, just the values. Like this:
2011-11-18 00.00.00.00,2030-12-31 00.00.00.00,1,20000,1,0,
...
...
An SQL file will be generated to load this CSV into each table, but it seems I need to know whether a column is of TIMESTAMP type in order to add PARSEDATETIME(MESG_DATE_FROM,'yyyy-mm-dd hh.mm.ss.uu') AS MESG_DATE_FROM to the SQL.
In DB2 we could use timestampformat=YYYY-MM-DD HH.MM.SS.UU in the merge query, so this was handled in a default way for all tables. Is there anything similar in H2?
I think what you want is not possible. There is no way to change the "global" timestamp format in H2 (as you can do in DB2).
I think you will need to either
construct the query based on the type of each column, using PARSEDATETIME where appropriate (see the sketch after this list), or
modify the CSV file to use the same timestamp format as H2 uses, which is the format defined in the JDBC specification, under SQL escape sequence: yyyy-mm-dd hh:mm:ss[.f...], or
provide a patch for H2 to support the "global" timestamp format.
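If you go with the first option, a rough sketch for the MESSAGE table from the question might look like the following. The PARSEDATETIME format string is an assumption based on the sample row, and the column names are passed to CSVREAD explicitly because the file has no header line:

MERGE INTO MESSAGE (MESG_DATE_FROM, MESG_DATE_TO, MESG_DISPLAY_SEQ, MESG_TIME_DELAY, MESG_ID, REASONTYPE_MAJOR)
SELECT
  PARSEDATETIME(MESG_DATE_FROM, 'yyyy-MM-dd HH.mm.ss.SS') AS MESG_DATE_FROM,
  PARSEDATETIME(MESG_DATE_TO,   'yyyy-MM-dd HH.mm.ss.SS') AS MESG_DATE_TO,
  MESG_DISPLAY_SEQ, MESG_TIME_DELAY, MESG_ID, REASONTYPE_MAJOR
FROM CSVREAD('MESSAGE_0.csv',
  'MESG_DATE_FROM,MESG_DATE_TO,MESG_DISPLAY_SEQ,MESG_TIME_DELAY,MESG_ID,REASONTYPE_MAJOR');

Since the queries are generated, the generator could read the column types from INFORMATION_SCHEMA.COLUMNS to decide which columns need to be wrapped in PARSEDATETIME.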
In Informatica I receive dates from flat files in the formats dd-mm-yyyy and dd/mm/yyyy. I need to convert all dates to one format, i.e. dd-mm-yyyy, using an expression and push them into the target so that no rows get rejected. How do I proceed?
If you are handling the data as a "date" data type, you should have no problems with respect to rejections.
As for handling a particular date format string, you can look at the properties at the Integration Service level; there you can control the default date format.
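As a hedged sketch, one way to do this in an Expression transformation (the port names IN_DATE and OUT_DATE are hypothetical) is to test each incoming format with IS_DATE and convert it with the matching TO_DATE, writing the result to a date/time output port:

-- expression for OUT_DATE (date/time output port); IN_DATE is the incoming string port
IIF(IS_DATE(IN_DATE, 'DD-MM-YYYY'), TO_DATE(IN_DATE, 'DD-MM-YYYY'),
IIF(IS_DATE(IN_DATE, 'DD/MM/YYYY'), TO_DATE(IN_DATE, 'DD/MM/YYYY'),
NULL))

Once the value flows through the mapping as a date/time port, the writer renders it according to the session or Integration Service date format mentioned above, so rows should not be rejected for format reasons.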