I have data from a flat file that I am loading into Netezza via nzload.
Some of the field types are numeric; however, the data received can sometimes contain invalid characters.
How can I check to make sure that the data is numeric in my import?
I saw TRY_CAST for T-SQL but didn't see anything similar in Netezza.
Netezza doesn't have an equivalent to TRY_CAST; you can, however, test whether a value is numeric in a few different ways. If you have the SQL Extensions Toolkit installed, you can use a regex function:
sql_functions.admin.regexp_like(<Column Name>, '^[+-]?[0-9]*[.]?[0-9]*$')
Otherwise you can use the translate function:
translate(<Column Name>,'0123456789','') in ('','.','-','-.')
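For example, after loading into a staging table whose columns are all varchar, the rows that fail the numeric check can be isolated like this (a minimal sketch; the table and column names are hypothetical):

-- Rows that do NOT look numeric, using the regex approach:
SELECT raw_value
FROM staging_table
WHERE NOT sql_functions.admin.regexp_like(raw_value, '^[+-]?[0-9]*[.]?[0-9]*$');

-- Or, without the SQL Extensions Toolkit, using translate:
SELECT raw_value
FROM staging_table
WHERE translate(raw_value, '0123456789', '') NOT IN ('', '.', '-', '-.');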
When I use T-SQL to convert a datetime into dd.mm.yyyy for a CSV output using SSIS, the file is produced with dd-mm-yyyy hh:mm:ss, which is not what I need.
I am using:
convert(varchar, dbo.[RE-TENANCY].[TNCY-START], 104)
which appears correct in SSMS.
Which is the best way to handle the conversion to be output from SSIS?
It's not as simple as I thought it would be.
It works for me.
Using your query as a framework for driving the package:
SELECT
    CONVERT(char(10), CURRENT_TIMESTAMP, 104) AS DayMonthYearDate
I explicitly declared a length for our dd.mm.yyyy value; since it's always going to be 10 characters, let's use a data type that reflects that.
Run the query and you can see it correctly produces 13.02.2019.
In SSIS, I added an OLE DB Source to the data flow and pasted in my query.
I wired up a Flat File Destination and ran the package. As expected, the string generated by the query entered the data flow and landed in the output file unchanged.
If you're experiencing otherwise, the first place I'd check is the metadata: double-click the line between your source and the next component and choose Metadata. Look at what is reported for the tenancy start column. If it doesn't indicate DT_STR/DT_WSTR, then SSIS thinks the data type is a date variant and is applying locale-specific rules to the format. You might also need to check how the column is defined in the Flat File Connection Manager.
The most precise control over the output format of a date can be achieved with the T-SQL FORMAT() function, available since SQL Server 2012.
It is slightly slower than CONVERT() but gives the desired flexibility.
An example:
SELECT TOP 4
    name,
    FORMAT(create_date, 'dd.MM.yyyy') AS create_date
FROM sys.databases;
name     create_date
------   -----------
master   08.04.2003
tempdb   12.02.2019
model    08.04.2003
msdb     30.04.2016
P.S. Take into account that FORMAT() produces NVARCHAR output, which is different from your initial conversion logic, so an extra cast to VARCHAR(10) may be necessary.
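A minimal sketch of that extra cast, using CURRENT_TIMESTAMP as a stand-in for your column:

-- FORMAT() returns NVARCHAR; cast back to VARCHAR(10) to match the original CONVERT() output type.
SELECT CAST(FORMAT(CURRENT_TIMESTAMP, 'dd.MM.yyyy') AS varchar(10)) AS DayMonthYearDate;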
I was trying to convert one of my MS Access queries containing Format() to a SQL Server view. I have my view connected to MS Access as a linked table. I was looking at this MS Access to SQL Server cheat sheet to convert Jet SQL to T-SQL.
The cheat sheet says:
Access: SELECT Format(Value, FormatSpecification) (note: this always returns a string value)
T-SQL: Do not do this in T-SQL; format data at your front-end application or report
I cannot format data at my front end because the SQL Server view is exposed to Access as a linked table, and I cannot apply a format function to linked tables.
The cheat sheet does not provide any explanation of why not to do this in T-SQL.
What is the reason behind not using format when converting Jet-SQL to T-SQL?
Obviously, you can format values in T-SQL using the FORMAT function, which has only minor differences from the Access Format function.
Generally, though, you shouldn't.
There are multiple reasons why it's discouraged:
Formatted strings are nearly always larger than unformatted dates/numbers, causing additional overhead when transmitting results (see the sketch after this list).
If you format in the application layer, the unformatted value is available to you in the application layer to validate, do calculations with, use for conditional formatting, etc. If you format the data in the database layer, you can't do this without casting it back to a date (which is a really bad practice).
If you want variable formatting based on things like locale settings, it's far easier to format in the application layer.
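A quick illustration of the size overhead from the first point (DATALENGTH reports bytes; assuming a plain datetime value):

-- An 8-byte datetime becomes a 20-byte NVARCHAR string once formatted as dd.MM.yyyy.
SELECT
    DATALENGTH(CURRENT_TIMESTAMP) AS date_bytes,                        -- 8
    DATALENGTH(FORMAT(CURRENT_TIMESTAMP, 'dd.MM.yyyy')) AS string_bytes; -- 20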
It's certainly not a limitation of SQL Server. It's just a bad practice to use it.
The following SQL statement works with MSSQL but not with SQLite:
... where Table.Column = N'999';
It seems the N (national character) prefix isn't supported in SQLite. Since I'm trying to use MSSQL and SQLite at the same time with the same code base, it would be great to create a universal solution. Is there any syntax difference or configuration I am missing?
In SQLite, all strings are always Unicode strings, so a separate NVARCHAR type is not needed.
If you want to use the same syntax in both databases, you have to drop the N prefix and use a SQL Server code page that contains all the characters you need.
(But it's likely that you will run into more differences.)
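Concretely, the shared syntax would be a plain literal, mirroring the snippet above:

-- Portable form accepted by both engines: no N prefix.
-- On SQL Server, the literal's characters must exist in the relevant code page.
... where Table.Column = '999';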
I have an imported database that contains Arabic characters, but they are displayed as question marks. Through some searching I found that the column data types should be nvarchar to support Unicode (they were varchar). I changed one of the columns that contains those characters to nvarchar, but the data is still displayed as "??". How can I change the existing values to become Unicode and display correctly?
You cannot just simply change the data type to nvarchar; that won't bring back the data, since it's already been "destroyed" by having been converted to a non-Unicode format.
You need to use nvarchar, and then you need to insert (or update) the data in a way that doesn't convert it back to ANSI codes.
If you use T-SQL to insert the Unicode data, make sure to use the N'...' prefix:
INSERT INTO dbo.YourTable(NvarcharCol)
VALUES (N'nvarchar-value')
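Similarly for updates, a minimal sketch using the same hypothetical table and column names:

-- The N prefix keeps the literal as Unicode so the Arabic text survives the round trip.
UPDATE dbo.YourTable
SET NvarcharCol = N'nvarchar-value'
WHERE <your key condition>;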
From a front-end language like C#, PHP, or Ruby, make sure to use Unicode strings; .NET (C# and VB.NET) does that automatically. When using parameterized queries, make sure to specify Unicode string types for the relevant parameters.
You need a different collation.
Read more here: https://msdn.microsoft.com/en-us/library/ms143508.aspx
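If you do stay with varchar, a collation whose code page covers Arabic can store those characters; a minimal sketch with hypothetical names:

-- Arabic_CI_AS maps varchar data through code page 1256, which includes Arabic letters.
ALTER TABLE dbo.YourTable
ALTER COLUMN ArabicCol varchar(100) COLLATE Arabic_CI_AS;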
I have quite a few tables, and I'm using SSIS to bring the data from Oracle to SQL Server; in the process I'd like to convert all varchar fields to nvarchar. I know I can use the Data Conversion transformation, but it seems the only way to do this is to set each field one by one, and then I'll have to manually set the mapping in the destination component to map to the "Copy of" field. I've got thousands of fields and it would be tedious to set each one... is there a way to say "if field is DT_STR, convert to DT_WSTR"?
What you can do, instead of replacing varchar with nvarchar manually before running the script, is copy and save all the CREATE TABLE scripts generated by SSIS to a document. Then you can do a global replace of varchar with nvarchar in the document.
Use the amended script as a step in your SSIS package to create the tables before populating them with the data from Oracle.
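For instance, the global replace turns a generated definition like this (hypothetical table) from:

CREATE TABLE dbo.SomeTable (SomeCol varchar(50));

into:

CREATE TABLE dbo.SomeTable (SomeCol nvarchar(50));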
The proper way is to use the Data Conversion step...
That said, it appears that if you disable external metadata validation in SSIS, you can bypass this error. SQL Server will then use an implicit conversion to the destination type.
See this SO post for a quick explanation.