I have one SQL Server instance accessed by many different applications.
Sometimes one of these applications throws the exception: "String or binary data would be truncated".
My objective is to trace (log) when that error happens and track down which application encountered the problem, and on which field.
I have no access to every application's code, so my first idea is to develop a solution directly in SQL Server, but I don't know how to check whether that problem occurred and on which field.
Even SQL Server won't tell you on which field it occurred. There is a Connect item which was logged for exactly this:
https://connect.microsoft.com/SQLServer/feedback/details/339410/please-fix-the-string-or-binary-data-would-be-truncated-message-to-give-the-column-name
But you can catch those errors with a simple TRY/CATCH and log them:
create table #t1
(
    charcol char(1)
)

begin try
    -- the second value is two characters wide and will not fit into char(1)
    insert into #t1
    values
    ('a'),
    ('aa')
end try
begin catch
    select error_message()
end catch
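To come back to the original goal of tracing which application hit the problem, the CATCH block can also record APP_NAME(), HOST_NAME() and the error details. A minimal sketch, assuming a hypothetical dbo.TruncationErrorLog table and re-using the #t1 example above:

CREATE TABLE dbo.TruncationErrorLog
(
    LoggedAt        datetime2      NOT NULL DEFAULT SYSDATETIME(),
    ApplicationName nvarchar(128)  NULL,
    HostName        nvarchar(128)  NULL,
    LoginName       nvarchar(128)  NULL,
    ErrorNumber     int            NULL,
    ErrorMessage    nvarchar(4000) NULL
);
GO

begin try
    insert into #t1 values ('a'), ('aa')
end try
begin catch
    -- log who hit the truncation error and what the engine reported
    insert into dbo.TruncationErrorLog
        (ApplicationName, HostName, LoginName, ErrorNumber, ErrorMessage)
    values
        (APP_NAME(), HOST_NAME(), SUSER_SNAME(), ERROR_NUMBER(), ERROR_MESSAGE());
end catch

This only helps where you control the T-SQL that performs the insert; it will not intercept errors raised by statements you cannot wrap.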
This is fixed in recent versions of SQL Server. Now you will be able to know the exact column:
Msg 2628, Level 16, State 1, Line 9
String or binary data would be truncated in
table 'StackOverflow2013.dbo.CoolPeople', column 'PrimaryCar'.
Truncated value: '2006 Subaru Impreza '.
This works in SQL Server 2019 if you enable the database scoped configuration below:
ALTER DATABASE SCOPED CONFIGURATION
SET VERBOSE_TRUNCATION_WARNINGS = ON;
For SQL Server 2016 and 2017 you have to turn on trace flag 460.
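For example, turning it on globally (it can also be set as the -T460 startup parameter):

DBCC TRACEON (460, -1);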
References and Examples:
https://www.brentozar.com/archive/2019/03/how-to-fix-the-error-string-or-binary-data-would-be-truncated/
I searched for exception 208 (invalid object name) as identified by SQL Profiler and found many hints regarding "Deferred Name Resolution". Nevertheless, I still did not find an answer related to TVPs (table-valued parameters) and SQL Server 2019.
I'm going to migrate an application from SQL Server 2016 to SQL Server 2019 and I'm wondering why Profiler has started throwing hundreds of exceptions per hour for the same code. It turned out that all of them are exception 208 (invalid object name).
Steps for reproduction:
Set the compatibility level of the database to 140 (2017);
SSMS as well as Profiler run without any exception.
Set the compatibility level of the database to 150 (2019);
SSMS runs fine, but SQL Profiler throws exception 208 (invalid object name).
Is there any way to get rid of this? If I'm looking for unexpected exceptions in the database, I go blind because of all these useless exceptions.
--ALTER DATABASE [...] SET COMPATIBILITY_LEVEL = 140;
ALTER DATABASE [...] SET COMPATIBILITY_LEVEL = 150;
GO
CREATE TYPE dbo.LocationTableType AS TABLE ( LocationName VARCHAR(20) );
GO
DECLARE @LocationTVP AS dbo.LocationTableType;
SELECT * FROM @LocationTVP; -- Throws the exception in profiler
--INSERT INTO @LocationTVP (LocationName) SELECT 'MyLocation'; -- Throws the exception in profiler
GO
DROP TYPE dbo.LocationTableType;
GO
Both the INSERT and the SELECT statement raise the exception. Could anyone let me know how to turn this off in SQL Server 2019 so that I can keep using SQL Profiler?
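If the exceptions themselves cannot be suppressed, one hedged workaround on the monitoring side is to trace errors with an Extended Events session that simply filters error 208 out; a sketch (the session and file names here are just placeholders):

CREATE EVENT SESSION errors_without_208 ON SERVER
ADD EVENT sqlserver.error_reported
    (WHERE severity >= 11 AND error_number <> 208)
ADD TARGET package0.event_file (SET filename = N'errors_without_208');
GO
ALTER EVENT SESSION errors_without_208 ON SERVER STATE = START;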
A Java Hibernate application calls my stored procedure in SQL Server 2019 through JDBC.
The stored procedure has a BEGIN TRY / BEGIN CATCH block.
Although I catch the error, it is reported back to Java as an exception,
and then the application finishes with an error.
From the database (and business) point of view, the error was caught and everything is fine.
I need the Java application not to get an exception, since the error was correctly handled by the stored procedure.
Is there any configuration or something (in the SP) I can add?
NOTE: I'm on the DB side, the Java code is not in my scope, it can't be modified
ADDED
I mean, whatever kind of error, for example:
CREATE PROCEDURE SP_TEST
AS
BEGIN TRY
    SELECT 1/0 as x
    PRINT 'no problem'
END TRY
BEGIN CATCH
    SELECT 1 as x
    PRINT 'don''t pay attention'
END CATCH
PRINT 'good bye'
return 0 -- always 0 = OK
Although the error was handled by the stored procedure,
the client application somehow still receives it.
I really don't know whether this behaviour comes from the driver or from the engine.
I'm trying to run this trigger, which is supposed to update rows in my customers table after a row in the same table is updated.
CREATE TRIGGER [project].updateFreeShipping
ON [project].[customers]
AFTER UPDATE
AS
BEGIN
UPDATE [project].[customers]
SET NextShippingIsFree = 1
WHERE Email IN (SELECT TOP 5 Email FROM dbo.Top10byMoney)
END
It throws the following error:
Msg 8197, Level 16, State 4, Procedure updateFreeShipping, Line 2
The object 'project.customers' does not exist or is invalid for this operation
This is a clear and classic error that is raised when you execute a statement in a different database from the one you think you are using. Very often, beginners do not check the current database context and end up trying to create objects in master, which is the default database in SSMS. Remember that SQL Server is a multi-database, multi-schema RDBMS and does not require any DB link to execute SQL scripts from one database against another... So pay close attention to which database context you are using!
To avoid such trouble, begin your script with a USE statement, like:
USE MyDatabase;
GO
CREATE TRIGGER [project].updateFreeShipping
ON [project].[customers]
AFTER UPDATE
AS
...
I agree with the other comments about the logic of your code, which doesn't make sense...
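As an aside on that logic: an AFTER UPDATE trigger that updates its own table normally scopes the UPDATE to the rows that were actually modified, using the inserted pseudo-table. A sketch only, re-using the names from the question and assuming Email identifies a row:

UPDATE c
SET NextShippingIsFree = 1
FROM [project].[customers] AS c
JOIN inserted AS i ON i.Email = c.Email
WHERE c.Email IN (SELECT TOP 5 Email FROM dbo.Top10byMoney);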
I got a request to change the maximum size of a comment field in an application. It was previously varchar(500), so after reading the documentation I decided to change the field's data type from varchar(500) to varchar(max). The database accepted the change without any problems (using Microsoft SQL Server Management Studio 2005 and 2008 for database management).
Then I went on to change the software. The software is written in Delphi and uses RemObjects to communicate with the database. So I changed the TDASchema for the server; it mapped my new varchar(max) field as a String(65536) data type (such an explicit static size worried me a little, but I went on). Then I retrieved the DataTable schema for my TDAMemDataTable object, which updated all the fields.
I started the application and decided to see whether the database would accept changes on this specific field. I edited one of the records and clicked the button to synchronize the DataSet with the server, and got this failure message:
The data types varchar(max) and text are incompatible in the equal to operator
I interpret this as my server object (the one that maps database fields to RemObjects objects) having mapped the field to the wrong data type in RemObjects.
How can this be resolved? What are the alternatives?
P.S. The release notes for RemObjects build .1267 clearly state:
fixed: DataSnap: fails to post updates to MSSQL 2005 VARCHAR(MAX)
I am using build .1067; I wonder if an update will fix the problem.
P.P.S. After updating to the latest version of RemObjects, the problem persists.
This error message usually happens when trying to compare a varchar(n) and text using an equality operator (usually in a WHERE clause, but it is possible elsewhere). There was an article on MSDN which covered a few points that might relate to this.
When you store data in a VARCHAR(N) column, the values are physically stored in the same way. But when you store it in a VARCHAR(MAX) column, behind the scenes the data is handled as a TEXT value. So there is some additional processing needed when dealing with a VARCHAR(MAX) value (only if the size exceeds 8000).
You mentioned that the TDASchema mapped your new field as String(65536), which, although I have never used RemObjects before, suggests that somewhere in its own code (or yours) a comparison of some kind is being done, hence the error message.
Try using VARCHAR(8000) instead of MAX and see if that fixes the issue.
The other option, if you can find where in the code this equality check is being done, is to try a CAST().
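For example, a comparison like this fails while one side is still a text column, but works once that side is cast (the table and column names here are made up):

-- fails: "The data types varchar(max) and text are incompatible in the equal to operator"
SELECT * FROM dbo.LegacyNotes WHERE NoteText = @Comment;

-- works: cast the text column (or alter it to varchar(max)) before comparing
SELECT * FROM dbo.LegacyNotes WHERE CAST(NoteText AS VARCHAR(MAX)) = @Comment;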
As you suspected, I think the root of your problems is that the fields haven't come into the TDASchema as the correct types. I've just tried it here and varchar(max) and nvarchar(max) fields come through to my schema as Memo and WideMemo respectively, not String(65536).
I'm using Delphi XE6 and SQL Server 2008 R2 via FireDAC.
This suggests an issue retrieving the metadata from the database. What database driver are you using? Can you try FireDAC (if available) or another driver to see if the problem persists?
Resolution for Delphi 7 and MS SQL Server 2008 R2 (SP2)
Delphi:
with TADOStoredProc.Create(Self) do
try
Connection := AConnection;
ProcedureName := ASPName;
Parameters.Refresh;
Parameters.ParamByName('@XML').Value := AXML;
try
ExecProc;
...
MS SQL Server:
ALTER PROCEDURE dbo.StoredProcName
    @XML NVARCHAR(MAX)
    ,@ErrMsgOut NVARCHAR(MAX) = NULL OUT
AS BEGIN
    SET NOCOUNT ON
    DECLARE @RETURN INT = 0
        ,@idoc INT
    BEGIN TRY
        -- Prepare XML
        DECLARE @XML_TEXT VARCHAR(MAX)
        SET @XML_TEXT = CONVERT(VARCHAR(MAX), @XML)
        EXEC sp_xml_preparedocument @idoc OUTPUT, @XML_TEXT
        -- Open XML
        SELECT *
        FROM OPENXML (@idoc, '/ServicesList/ServicesItem', 2)
        WITH
        (
            YourFields AndTypes
        )
...
I am using a dataset to insert data being converted from an older database. The requirement is to maintain the current Order_ID numbers.
I've tried using:
SET IDENTITY_INSERT orders ON;
This works when I'm in SQL Server Management Studio; I am able to successfully run
INSERT INTO orders (order_Id, ...) VALUES ( 1, ...);
However, it does not allow me to do it via the dataset insert that I'm using in my conversion script, which looks basically like this:
dsOrders.Insert(oldorderId, ...);
I've run the SQL (SET IDENTITY_INSERT orders ON) during the process too. I know that I can only do this against one table at a time and I am.
I keep getting this exception:
Exception when attempting to insert a value into the orders table
System.Data.SqlClient.SqlException: Cannot insert explicit value for identity column in table 'orders' when IDENTITY_INSERT is set to OFF.
Any ideas?
Update
AlexS & AlexKuznetsov have mentioned that SET IDENTITY_INSERT is a connection-level setting; however, when I look at the SQL in SQL Profiler, I notice several commands.
First - SET IDENTITY_INSERT DEAL ON
Second - exec sp_reset_connection
Third to nth - my various SQL commands, including SELECTs and INSERTs
There is always an exec sp_reset_connection between the commands, though, and I believe this is responsible for losing the IDENTITY_INSERT setting.
Is there a way to stop my dataset from resetting the connection?
You have the options mixed up:
SET IDENTITY_INSERT orders ON
will turn ON the ability to insert specific values (that you specify) into a table with an IDENTITY column.
SET IDENTITY_INSERT orders OFF
Turns that behavior OFF again and the normal behavior (you can't specify values for IDENTITY columns since they are auto-generated) is reinstated.
Marc
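For reference, the usual pattern keeps the setting, the insert with an explicit column list, and the reset together in one batch on one connection (the column list is shortened here):

SET IDENTITY_INSERT dbo.orders ON;

INSERT INTO dbo.orders (order_Id /*, other columns */)
VALUES (1 /*, other values */);

SET IDENTITY_INSERT dbo.orders OFF;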
You want to do SET IDENTITY_INSERT ON to allow you to insert into identity columns.
It seems a bit backwards, but that's the way it works.
It seems that you're doing everything right: SET IDENTITY_INSERT orders ON is the right way on SQL Server's side. But the problem is that you're using datasets. From the code you've provided, I can say that you're using a typed dataset - the one that was generated in Visual Studio based on the database.
If this is the case (most likely), then the dataset contains a constraint that does not allow you to set values for the orderId field, i.e. it is that code that does not allow specifying an explicit value, not SQL Server. You should go to the dataset designer and edit the properties of the orderId field: set AutoIncrement and ReadOnly to false. The same changes can also be performed at run time. This will allow you to add a row with an explicit value for orderId to the dataset and later save it to the SQL Server table (you will still need SET IDENTITY_INSERT).
Also note that IDENTITY_INSERT is a connection-level setting, so you need to be sure that you're executing the corresponding SET on exactly the same connection that you will be using to save your changes to the database.
I would use Profiler to determine whether your SET IDENTITY_INSERT orders ON;
is issued from the same connection as your subsequent inserts, as well as the exact SQL being executed during inserts.
AlexS was correct: IDENTITY_INSERT worked, but it is a connection-level setting, so I needed to set IDENTITY_INSERT within a transaction.
I used Ryan Whitaker's TableAdapterHelper code,
and I created an update command on my TableAdapter that ran the SET IDENTITY_INSERT. I then had to create a new insert command with the identity column specified. I then ran this code:
SqlTransaction transaction = null;
try
{
    using (myTableAdapter myAdapter = new myTableAdapter())
    {
        // Run SET IDENTITY_INSERT and the insert on the same connection/transaction
        transaction = TableAdapterHelper.BeginTransaction(myAdapter);
        myAdapter.SetIdentityInsert();
        myAdapter.Insert(myPK, myColumn1, myColumn2, ...);
    }
    transaction.Commit();
}
catch (Exception ex)
{
    if (transaction != null)
        transaction.Rollback();
}
finally
{
    if (transaction != null)
        transaction.Dispose();
}
In case you still have problems with IDENTITY_INSERT, you can try using a complete INSERT statement that lists the identity column explicitly, like:
INSERT INTO [User] (Id, Name) VALUES (1, 'jeff');