CakePHP truncating large varchar columns from SQL Server database - sql-server

Using CakePHP 1.3.11 and SQL Server 2005 and the included MSSQL database driver.
I need to retrieve a varchar(8000) field, but the typical find() queries truncate this field to 256 characters; the actual array value array['comment'] is truncated, so the data beyond character 256 is never available to my application at all.
I tried changing the field to a text datatype, and with that change the query returns the full value of the column. Is there a way for CakePHP to read the full value of the column, or does it always truncate varchars to 256 characters?

The solution has been to use the text data type on the database side.
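A minimal sketch of the schema-side fix, using hypothetical table and column names; defining the column as text instead of varchar(8000) is what allowed the full value to come back through the driver.
-- Hypothetical table/column names; the comment column is text rather than varchar(8000)
CREATE TABLE comments (
    id int IDENTITY(1,1) PRIMARY KEY,
    comment text NULL
);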

Related

NTEXT on SQL Server to NVARCHAR2(2000) on Oracle (ORA-12899: value too large for column)

My source is in SQL Server and the target is Oracle. Some tables have columns defined as NTEXT in SQL Server, and I created NVARCHAR2(2000) columns, which allow 4000 bytes, to store the data from the source.
When I pull the NTEXT data from SQL Server, I cast and substring it to fit into 4000 bytes in the target. I'm using IBM DataStage to extract the source from SQL Server, and the code below converts the data type to varchar(4000) and extracts a substring with the specified length so it stays within 4000 bytes.
cast(substring([text],1,3950) as varchar(4000)) as "TEXT"
However, an ORA-12899 error often occurs when the data is inserted into the NVARCHAR2(2000) column on Oracle, which is sized at 4000 bytes.
Error message: ORA-12899: value too large for column (actual: 3095, maximum: 2000).
First, it is hard to understand why the error occurs even though the destination column is sized at 4000 bytes and I have already cut the data down using SUBSTRING.
Second, I was wondering if I am missing anything in handling this issue, given that my team is not considering CLOB on Oracle for the NTEXT data.
Please help me to resolve this issue. I'm working on many tables and the error occurs often.
NVARCHAR2(2000) is limited to 2000 characters, which can take up to 4000 bytes. You have to specify the substring limit in characters (so 2000, not 4000). Try changing your expression to:
cast(substring([text],1,2000) as nvarchar(2000)) as "TEXT"
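A rough check on the Oracle side, with a hypothetical staging table name, that the sizing works out: the NVARCHAR2 limit is 2000 characters, which may occupy up to 4000 bytes under the AL16UTF16 national character set.
-- Hypothetical target table; a 2000-character value fits regardless of its byte count
create table stage_text (text_col nvarchar2(2000));
insert into stage_text values (rpad(n'a', 2000, 'a'));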

SQL Server 2019 database: consequences of keeping nvarchar column types while changing the collation to Latin1_General_100_CI_AI_SC_UTF8

We need to store a lot of UTF-8 encoded data in XML data type columns, and the XML files we store explicitly state encoding='UTF-8', which results in an error when trying to store the XML data in the column.
We are now in the process of switching the database's default collation from the prior UTF-16 based Latin1_General_100_CI_AI_SC to Latin1_General_100_CI_AI_SC_UTF8. Do we need to switch all nvarchar columns to varchar in the process? We are not afraid of losing any data, and we (probably) have only a few specially encoded characters in our data; we are all in the 'latin' alphabet. Is it simply going to affect storage size (nvarchar using 2x the space)? Will there be a performance hit on joins? Any other consequences?
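A minimal sketch of what the switch could look like, with hypothetical database, table, and column names; note that a UTF-8 collation only changes how char/varchar columns are stored, while nchar/nvarchar columns remain UTF-16 regardless of collation.
-- Hypothetical names; only char/varchar columns are actually stored as UTF-8
ALTER DATABASE MyDb COLLATE Latin1_General_100_CI_AI_SC_UTF8;
ALTER TABLE dbo.Documents
    ALTER COLUMN Title varchar(200) COLLATE Latin1_General_100_CI_AI_SC_UTF8;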

SQL Server to Oracle migration - ORA-12899: value too large for column

While migrating a SQL Server database to Oracle, I end up with an error
ORA-12899: value too large for column
though the datatypes are the same.
This is happening with strings like 'enthält'. The data type NVARCHAR(7) can hold the given string in SQL Server, whereas on Oracle VARCHAR2(7) is not able to hold the value and throws the value-too-large error.
Is this something to do with the character encoding on Oracle? How can we resolve this?
Thanks
You can create your Oracle table with something like varchar2(7 char); this causes the length to be allocated in units of characters, not bytes. With the default byte semantics, 'enthält' needs 8 bytes in an AL32UTF8 database because 'ä' is encoded as two bytes, so it does not fit in varchar2(7).
This succeeds:
create table tbl(x varchar2(7 char));
insert into tbl values ('enthält');
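An alternative sketch: switching the session to character-length semantics before creating the tables makes a plain varchar2(7) allocate 7 characters rather than 7 bytes.
-- Character semantics apply to tables created in this session from here on
alter session set nls_length_semantics = CHAR;
create table tbl2(x varchar2(7));
insert into tbl2 values ('enthält');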

Data gets truncated while inserting JSON data

I have a SQL table column whose datatype is 'text'. I am trying to insert JSON data, which I have in a ColdFusion variable, using cfqueryparam with cf_sql_longvarchar.
However, when I check the value in my column, I'm losing data. I compared the total length of the column in the SQL table to the data being held, and there is plenty of length left in that column.
This probably has to do with the settings on your datasource in the ColdFusion Administrator.
I would experiment with the CLOB setting and the long text buffer size there and see what you can come up with.

What advantage does TEXT have over varchar when the required length is < 8000?

SQL Server Text type vs. varchar data type:
As a rule of thumb, if you ever need your text value to exceed 200 characters AND do not use join on this column, use TEXT. Otherwise use VARCHAR.
Assume my data is now 4000 characters AND I do not use join on this column. By that quote, it is more advantageous to use TEXT/varchar(max) than varchar(4000).
Why so? (what advantage does TEXT/varchar(max) have over normal varchar in this case?)
TEXT is deprecated, use nvarchar(max), varchar(max), and varbinary(max) instead: http://msdn.microsoft.com/en-us/library/ms187993.aspx
I disagree with the 200-character rule of thumb because it isn't explained, unless it relates to the deprecated "text in row" option.
If your data is 4000 characters, then use char(4000): it is fixed length.
Text is deprecated.
BLOB types are slower.
In old versions of SQL (2000 and earlier?) there was a max row length of 8 KB (or 8060 bytes). If you used varchar for lots of long text columns they would be included in this length, whereas any text columns would not, so you can keep more text in a row.
This issue has been worked around in more recent versions of SQL.
This MSDN page includes the statement:
SQL Server 2005 supports row-overflow storage which enables variable length columns to be pushed off-row. Only a 24-byte root is stored in the main record for variable length columns pushed out of row; because of this, the effective row limit is higher than in previous releases of SQL Server. For more information, see the "Row-Overflow Data Exceeding 8 KB" topic in SQL Server 2005 Books Online.
