What is the alternative to TRUNCATECOLUMNS when inserting instead of copying?

I am learning how to use Snowflake. As I understand it, TRUNCATECOLUMNS only works with "COPY INTO". Is this true? If so, what can I use to achieve the same result with "INSERT INTO"?
P.S. I just want to truncate a column's value if the string is longer than a certain length.

To truncate the column value, LEFT could be used:
CREATE TABLE trg_tab(col_name VARCHAR(20));
INSERT INTO trg_tab(col_name)
SELECT LEFT(col_name, 20)
FROM src_tab;
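The same truncation can be sketched in Python for anyone preparing rows client-side before the INSERT; this is an illustrative sketch (the 20 mirrors the VARCHAR(20) above, and the function name is made up), not a Snowflake API:

```python
# Truncate each value to at most 20 characters before inserting,
# mirroring what LEFT(col_name, 20) does inside Snowflake.
MAX_LEN = 20

def truncate_value(value: str, max_len: int = MAX_LEN) -> str:
    """Return value unchanged if it fits, otherwise its first max_len characters."""
    return value[:max_len]

rows = ["short", "x" * 25]
truncated = [truncate_value(r) for r in rows]
print(truncated)  # ['short', 'xxxxxxxxxxxxxxxxxxxx']
```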
Moreover, there is no performance difference between VARCHAR(20) and a plain VARCHAR with no size specified. From the Snowflake documentation:
VARCHAR
If no length is specified, the default is the maximum allowed length (16,777,216).
When choosing the maximum length for a VARCHAR column, consider the following:
Storage: A column consumes storage for only the amount of actual data stored. For example, a 1-character string in a VARCHAR(16777216) column only consumes a single character.
Performance: There is no performance difference between using the full-length VARCHAR declaration VARCHAR(16777216) or a smaller length.
Tools for working with data: Some BI/ETL tools define the maximum size of the VARCHAR data in storage or in memory. If you know the maximum size for a column, you could limit the size when you add the column.
Collation: When you specify a collation for a VARCHAR column, the number of characters that are allowed varies, depending on the number of bytes each character takes and the collation specification of the column.

Related

SQL Server string comparison with equals sign and equals or greater in the strings [duplicate]

I have seen prefix N in some insert T-SQL queries. Many people have used N before inserting the value in a table.
I searched, but I was not able to understand what is the purpose of including the N before inserting any strings into the table.
INSERT INTO Personnel.Employees
VALUES(N'29730', N'Philippe', N'Horsford', 20.05, 1),
What purpose does this 'N' prefix serve, and when should it be used?
It's declaring the string as the nvarchar data type, rather than varchar.
You may have seen Transact-SQL code that passes strings around using
an N prefix. This denotes that the subsequent string is in Unicode
(the N actually stands for National language character set). Which
means that you are passing an NCHAR, NVARCHAR or NTEXT value, as
opposed to CHAR, VARCHAR or TEXT.
To quote from Microsoft:
Prefix Unicode character string constants with the letter N. Without
the N prefix, the string is converted to the default code page of the
database. This default code page may not recognize certain characters.
If you want to know the difference between these two data types, see this SO post:
What is the difference between varchar and nvarchar?
Let me tell you about an annoying thing that happened with the N'' prefix; I wasn't able to fix it for two days.
My database collation is SQL_Latin1_General_CP1_CI_AS.
It has a table with a column called MyCol1, which is an nvarchar.
This query fails to match an exact value that exists:
SELECT TOP 1 * FROM myTable1 WHERE MyCol1 = 'ESKİ'
-- 0 results
Using the N'' prefix fixes it:
SELECT TOP 1 * FROM myTable1 WHERE MyCol1 = N'ESKİ'
-- 1 result - found!
Why? Because Latin1_General doesn't contain the dotted capital İ; that's why it fails, I suppose.
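A quick way to see the code-page problem outside SQL Server is to try the encodings in Python; this is a sanity check, not SQL Server itself, assuming cp1252 approximates the Western-European code page behind SQL_Latin1_General_CP1 and cp1254 the Turkish one:

```python
# The dotted capital İ (U+0130) exists in the Turkish code page (cp1254)
# but not in the Western-European one (cp1252), so a non-Unicode literal
# in a cp1252 database cannot represent it.
text = "ESKİ"

print(text.encode("cp1254"))  # encodes fine: b'ESK\xdd'

try:
    text.encode("cp1252")     # İ has no slot in this code page
except UnicodeEncodeError as exc:
    print("cp1252 cannot represent:", exc.object[exc.start])
```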
1. Performance:
Assume your where clause is like this:
WHERE NAME='JON'
If the NAME column is of any type other than nvarchar or nchar, you should not specify the N prefix. However, if the NAME column is nvarchar or nchar and you do not specify the N prefix, then 'JON' is treated as non-Unicode. The data types of the NAME column and the string 'JON' then differ, so SQL Server implicitly converts one operand's type to the other. If SQL Server converts the literal's type to the column's type there is no issue, but if it converts the column's type to the literal's, performance suffers because the column's index (if available) won't be used.
2. Character set:
If the column is of type nvarchar or nchar, always use the N prefix when specifying the character string in a WHERE/UPDATE/INSERT clause. If you do not, and one of the characters in your string is Unicode-only (an international character such as ā), the statement will fail or the data will be corrupted.
In short: we use N'' only because the value is of nvarchar type.

unable to update nvarchar(50) having czech letters in it [duplicate]


Length vs Precision

When I describe a table with an nvarchar data type, what's the difference between Precision and Length? I see that Length is always double. For example, if the value is nvarchar(64), the precision is 64 and the length is 128.
CREATE TABLE T(X nvarchar(64))
EXEC sp_columns 'T'
Precision has no meaning for text data.
As for the Length property, I think you confuse it with the Size reported by SQL Server Management Studio, which is the size of a column in bytes. The Length of an nvarchar(64) column is 64 while Size is 128.
The size of unicode types (nchar, nvarchar) is double the number of characters because Unicode uses two bytes for each character.
You can get these values using the LEN function for the number of characters and the DATALENGTH function for the number of bytes, e.g.:
select len(N'some value'), datalength(N'some value')
which returns 10 and 20.
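The same counts can be sanity-checked outside SQL Server; nvarchar stores UCS-2/UTF-16, which Python's utf-16-le codec reproduces for these characters (a sketch to illustrate the math, not SQL Server itself):

```python
# LEN counts characters; DATALENGTH counts bytes.
# For an nvarchar value, bytes = 2 * characters (for BMP characters).
value = "some value"

char_count = len(value)                      # like LEN(N'some value')
byte_count = len(value.encode("utf-16-le"))  # like DATALENGTH(N'some value')

print(char_count, byte_count)  # 10 20
```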
EDIT
From your comments I see you use sp_columns to get at the table's schema info. You shouldn't use any of the catalog stored procedures; use the catalog views instead.
As the documentation states, catalog stored procedures exist to support ODBC applications, hence their results are limited and may need interpretation, as you found out. sp_columns doesn't differentiate between character and byte length, for example.
Schema views like those in the INFORMATION_SCHEMA or sys schemas return detailed and unambiguous information. For example, INFORMATION_SCHEMA.COLUMNS returns the character length in CHARACTER_MAXIMUM_LENGTH and the byte size in CHARACTER_OCTET_LENGTH. It also includes collation and character-set information not returned by sp_columns.
The INFORMATION_SCHEMA views are defined by ISO, so they don't include some SQL Server-specific info, like whether a column is a computed column, stored in a filestream, or replicated. You can get this info using the system object catalog views like sys.columns.
NVARCHAR doesn't have precision; precision is used for decimal types. Length is the character length.
nvarchar [ ( n | max ) ]
Variable-length Unicode string data. n defines the string length and can be a value from 1 through 4,000. max
indicates that the maximum storage size is 2^31-1 bytes (2 GB). The
storage size, in bytes, is two times the actual length of data entered
+ 2 bytes. The ISO synonyms for nvarchar are national char varying and national character varying.
From the source:
Precision is the number of digits in a number.
Length for a numeric data type is the number of bytes that are used to
store the number. Length for a character string or Unicode data type
is the number of characters
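The storage formula quoted earlier ("two times the actual length of data entered + 2 bytes") can be sketched as a small helper; treat this as a rough model for BMP characters, not an exact account of SQL Server's row format:

```python
# Approximate on-disk size of an nvarchar value:
# 2 bytes per (BMP) character plus a 2-byte length overhead.
def nvarchar_storage_bytes(value: str) -> int:
    return 2 * len(value) + 2

print(nvarchar_storage_bytes("hello"))  # 12
print(nvarchar_storage_bytes(""))       # 2
```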
NVARCHAR doesn't have precision, and the length is the character length. Try the following SQL:
SELECT LEN('Whats the length of this string?')
You may be confusing this with numeric or decimal; see the chart here.
Three years later, but still relevant; I had a similar issue.
I encountered a UDDT 'd_style'; using Alt+F1 I saw it is 'nvarchar, length=2, prec=1'.
And indeed, I cannot insert e.g. 'EU' into this field, as the data would be truncated.
So length 2 does not mean 2 characters here. This is because each character is stored as 2 bytes, thus prec = length/2.
In this case, 'precision' is indeed the maximum number of characters allowed.
However, when you create a new table with the nvarchar datatype, you simply enter the desired character length (e.g. my_field nvarchar(2)) if you want to be able to insert the example value 'EU'.
Mind that this is all quite 'practical'; in theory, precision is only applicable to numeric values and is the number of digits in a number. So we shouldn't be talking about 'precision' for nvarchar at all.
Indeed, this is a confusing topic in SQL server, even the documentation is inconsistent.
For example see the syntax definition of CREATE TABLE
<data type> ::=
[ type_schema_name . ] type_name
[ ( precision [ , scale ] | max |
[ { CONTENT | DOCUMENT } ] xml_schema_collection ) ]
You see "precision", but don't see "length | precision".
And as others pointed out, everywhere else in the documentation they refer to the same attribute as length when talking about character datatypes. Except where they don't, like the documentation of sp_help and its companions:
Length int Column length in bytes.
Prec char(5) Column precision.
Scale char(5) Column scale.
Here length is the storage length. (Most people would call it size.)
So unfortunately there is no correct or final answer to your question, you must always consider the context, and consult the documentation, when in doubt.

Migrating from sql server to Oracle varchar length issues

I'm facing a strange issue trying to move from SQL Server to Oracle.
In one of my tables I have a column defined as NVARCHAR(255).
After reading a bit, I understood that SQL Server counts characters where Oracle counts bytes.
So I defined my table in Oracle as VARCHAR(510), since 255*2 = 510.
But when using sqlldr to load the data from a tab-delimited text file, I get an error indicating some entries exceeded the length of this column.
After checking in SQL Server using:
SELECT MAX(DATALENGTH(column))
FROM table
I get that the max data length is 510.
I do use the Hebrew_CI_AS collation, even though I don't think it changes anything...
I also checked in SQL Server whether any of the entries contains a TAB, but no... so I guess it's not corrupted data.
Anyone have an idea?
EDIT
After further checking I've noticed that the issue is also due to the data file (in addition to the issue solved by Justin Cave's post).
I have changed the row delimiter to '^', since none of my data contains this character, and '|^|' as the column delimiter,
creating a control file as follows:
load data
infile data.txt "str '^'"
badfile "data_BAD.txt"
discardfile "data_DSC.txt"
into table table
FIELDS TERMINATED BY '|^|' TRAILING NULLCOLS
(
col1,
col2,
col3,
col4,
col5,
col6
)
The problem is that my data contains <CR> characters, and sqlldr expects a stream file there, so it fails on the <CR>! I do not want to change the data, since it's textual data (error messages, for example).
What is your database character set?
SELECT parameter, value
FROM v$nls_parameters
WHERE parameter LIKE '%CHARACTERSET'
Assuming that your database character set is AL32UTF8, each character could require up to 4 bytes of storage (though almost every useful character can be represented with at most 3 bytes of storage). So you could declare your column as VARCHAR2(1020) to ensure that you have enough space.
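The byte math can be checked with Python's utf-8 codec, which matches AL32UTF8 for this purpose (a sketch of the arithmetic, not Oracle itself): a Hebrew letter takes 2 bytes, so 255 of them exactly fill VARCHAR2(510), and a single 3-byte character pushes the value over the limit.

```python
# AL32UTF8 is Oracle's name for UTF-8: Hebrew letters need 2 bytes each,
# while some other characters need 3 (or even 4) bytes.
hebrew_letter = "ש"  # 2 bytes in UTF-8
euro_sign = "€"      # 3 bytes in UTF-8

# 255 Hebrew characters exactly fill a VARCHAR2(510 BYTE) column ...
all_hebrew = hebrew_letter * 255
print(len(all_hebrew.encode("utf-8")))  # 510

# ... but swapping in one 3-byte character exceeds it.
mixed = hebrew_letter * 254 + euro_sign
print(len(mixed.encode("utf-8")))  # 511
```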
You could also simply use character length semantics. If you declare your column VARCHAR2(255 CHAR), you'll allocate space for 255 characters regardless of the amount of space that requires. If you change the NLS_LENGTH_SEMANTICS initialization parameter from the default BYTE to CHAR, you'll change the default so that VARCHAR2(255) is interpreted as VARCHAR2(255 CHAR) rather than VARCHAR2(255 BYTE). Note that the 4000-byte limit on a VARCHAR2 remains even if you are using character length semantics.
If your data contains line breaks, do you need the TRAILING NULLCOLS parameter? That implies that sometimes columns may be omitted from the end of a logical row. If you combine columns that may be omitted with columns that contain line breaks and data that is not enclosed by at least an optional enclosure character, it's not obvious to me how you would begin to identify where a logical row ended and where it began. If you don't actually need the TRAILING NULLCOLS parameter, you should be able to use the CONTINUEIF parameter to combine multiple physical rows into a single logical row. If you can change the data file format, I'd strongly suggest adding an optional enclosure character.
The number of bytes used by an NVARCHAR field is two times the number of characters plus two (see http://msdn.microsoft.com/en-us/library/ms186939.aspx), so if you make your VARCHAR field 512 bytes you may be OK. There's also some indication that some character sets use 4 bytes per character, but I've found no indication that Hebrew is one of these character sets.

What data type use instead of 'ntext' data type?

I want to write a trigger for one of my tables which has an ntext field, and as you know, a trigger can't be written for the ntext datatype.
Now I want to replace the ntext with the nvarchar datatype. The ntext maximum length is 2,147,483,647 characters, whereas nvarchar(n) is limited to 4,000 characters.
What datatype can I use instead of ntext?
Or is there any way to write a trigger when I have an ntext datatype?
I should mention that my database was designed on SQL Server 2000 and is full of data.
You're out of luck with SQL Server 2000, but you can possibly chain together a bunch of nvarchar(4000) variables. It's a hack, but it may be the only option you have. I would also do an assessment of your data and see what the largest value you actually have in that column is. A lot of times, columns are sized in anticipation of a large data set, but in the end it never arrives.
In MSDN I see this:
* Important *
ntext, text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and varbinary(max) instead.
Fixed and variable-length data types for storing large non-Unicode and Unicode character and binary data. Unicode data uses the UNICODE UCS-2 character set.
So nvarchar(MAX) is preferred. You can see the details below:
nvarchar [ ( n | max ) ]
Variable-length Unicode string data. n defines the string length and can be a value from 1 through 4,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size, in bytes, is two times the actual length of data entered + 2 bytes. The ISO synonyms for nvarchar are national char varying and national character varying.
