I'm doing an integration on a community platform called Telligent. I'm using a 3rd-party add-on called BlogML to import blog posts from an XML file (in BlogML format) into my local Telligent site. The Telligent platform comes with many classes in their SDK so that I can programmatically add content, such as blog posts. E.g.
myWeblogService.AddPost(myNewPostObject);
The BlogML app I'm using essentially parses the XML and creates blog post objects then adds them to the site using code like the above sample line. After about 40 post imports I get a SQL error:
Exception Details: System.Data.SqlClient.SqlException:
String or binary data would be truncated.
The statement has been terminated.
I believe this error means that I'm trying to insert too much data into a db field that has a max size limit. Unfortunately, I cannot tell which field this is an issue for. I ran the SQL Server Profiler while doing the import but I cannot seem to see what stored procedure the error is occurring on. Is there another way to use the profiler or another tool to see exactly what stored procedure and even what field the error is being caused by? Are there any other tips to get more information about where specifically to look?
Oh the joys of 3rd-party tools...
You are correct: that exception means too much data is being stuffed into a character/binary field. Running a trace should definitely let you see which procedure/statement is throwing the exception, provided you capture the right events. Those you'd want to capture include:
SQL:BatchStarting
SQL:BatchCompleted
SQL:StmtStarting
SQL:StmtCompleted
RPC:Starting
RPC:Completed
SP:Starting
SP:Completed
SP:StmtStarting
SP:StmtCompleted
Exception
If you know for certain that the faulty code lives in a stored procedure, you can skip the first four events (the SQL:Batch and SQL:Stmt pairs). Be sure to capture all associated columns in the trace as well (this should be the default if you run the trace with the Profiler tool). The Exception class will include the actual error in your trace, which lets you see the statement within the same SPID that immediately preceded the exception. You must include the starting events in addition to the completed events, because an exception prevents the associated completed events from firing in the trace.
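Before tracing, it can be worth confirming the diagnosis in isolation, since the error is trivial to reproduce. A minimal sketch using a hypothetical throwaway table:

```sql
-- Hypothetical repro: inserting a value longer than the column's limit
-- raises error 8152, "String or binary data would be truncated."
CREATE TABLE dbo.TruncationDemo (Title varchar(10));

INSERT INTO dbo.TruncationDemo (Title)
VALUES ('This title is far longer than ten characters');

DROP TABLE dbo.TruncationDemo;
```

The same error text you're seeing during the import should appear, which confirms you're hunting for an oversized value rather than some other failure.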
If you can filter your trace to a particular database, application, host name, etc. that will certainly make it easier to debug if you are on a busy server, however if you are on an idle server you may not need to bother with the filtering.
Assuming you are using SQL Server 2005+, the trace will include a column called EventSequence, which is an incrementing value ordered by the sequence in which events fire. Once you run the trace and capture the output, find the Exception event (in Profiler, its row is colored red), then simply locate the most recent SP:StmtStarting or SQL:StmtStarting event for the same SPID that occurred before the Exception.
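If you save the trace to a file, you can also query it after the fact instead of scrolling through Profiler. A sketch, where the trace file path is a hypothetical placeholder:

```sql
-- Read the saved trace ordered by EventSequence; scan for the Exception
-- event, then inspect the preceding rows with the same SPID.
SELECT EventSequence, SPID, EventClass, TextData, ObjectName
FROM sys.fn_trace_gettable(N'C:\Traces\blogml_import.trc', DEFAULT)
ORDER BY EventSequence;
```

This makes it easy to filter down to a single SPID once you know which connection hit the error.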
Here is a screenshot of a trace I captured while reproducing an error similar to yours:
You can see the exception line in red; the highlighted line is the SP:StmtStarting event that fired immediately before the exception for the same SPID. If you want to find which stored procedure this statement is part of, look at the values in the ObjectName and/or ObjectId columns.
A simple mistake can also cause this error. For example, if you are trying to insert a string like
String reqName = "Food Non veg \n";
the trailing \n is the culprit: it adds an extra character that can push the value past the column's limit. Remove the \n from the string to get rid of the error.
I hope this helps someone.
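If you suspect hidden characters like a trailing newline, you can check the effective length on the SQL side. A small sketch:

```sql
-- A trailing newline is not a trailing blank, so LEN counts it
SELECT LEN('Food Non veg')            AS without_newline,  -- 12
       LEN('Food Non veg' + CHAR(10)) AS with_newline;     -- 13
```

Comparing the two lengths against the column's declared size quickly shows whether invisible characters are tipping a value over the limit.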
I have a stored proc that is called by a .net application and passes an xml parameter - this is then shredded and forms the WHERE section of the query.
So in my query, I look for records with a documentType matching that contained in the XML. The table contains many more records with a documentType of C than P.
The query will run fine for a number of weeks, regardless of whether the XML contains P or C for documentType. Then it stops working for documentType C.
I can run both queries from SSMS with no errors (using profiler to capture the exact call that was made). Profiler shows that when run from the application, the documentType C query starts a statement then finishes before the statement ends, and before completing the outstanding steps of the query.
I ran another profiler session to capture all errors and warnings. All I can see is error 3621 - The statement has been terminated.
There are no other errors relating to this spid, the only other things to be picked up were warnings changing database context.
I've checked the SQL logs and extended events and can find nothing. I don't think the query relates to the data content as it runs in SSMS without problems - I've also checked the range values for other fields in the WHERE clause and nothing unusual or untoward there. I also know that if I drop and recreate the procedure (i.e. exact same code) the problem will be fixed.
Does anyone know how I can trace the error that is causing the 3621 failure? Profiling does not pick this up.
In some situations, SQL Server raises two error messages: one is the actual error message saying exactly what is happening, and the other is 3621, which just says The statement has been terminated.
Sometimes the first message gets lost, especially when you are calling a SQL query or object from a script.
I suggest going through each of your SQL statements and running them individually.
Another guess is that you have a timeout error on the client side. If you capture the Attention event in your SQL Server trace, you can follow the timeout errors.
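If you can wrap the call in a batch of your own, a TRY/CATCH block will surface the real error that accompanies 3621. A sketch, where dbo.MyStoredProc and its parameter are hypothetical stand-ins for your procedure:

```sql
BEGIN TRY
    EXEC dbo.MyStoredProc @DocumentType = 'C';  -- hypothetical call
END TRY
BEGIN CATCH
    -- The CATCH block sees the underlying error, not just 3621
    SELECT ERROR_NUMBER()    AS ErrorNumber,
           ERROR_MESSAGE()   AS ErrorMessage,
           ERROR_PROCEDURE() AS ErrorProcedure,
           ERROR_LINE()      AS ErrorLine;
END CATCH;
```

Running this from SSMS while reproducing the application's exact call often reveals the first, more specific message that the client is swallowing.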
Is there a way to get the sql_id/child number/plan hash after calling OCIStmtExecute()? I can't see it in OCIAttrGet().
NOTE: I'm asking as a regular user who cannot see v$session. If I could, it would be as simple as executing select prev_sql_id, prev_child_number from v$session where sid=sys_context('USERENV', 'SID')
Thanks!
There is no way to get the sql_id or the plan_hash_value with OCI or sys_context. However, it might be a good idea to file an enhancement request with Oracle Support to add that feature.
There is the possibility to trace all sql statements of a session with the following statement:
alter session set events '10046 trace name context forever, level 12'
Depending on the trace level, more or less trace is generated (levels 4 and 8 create less information). To turn tracing off, execute
alter session set events '10046 trace name context off'
The other option is to create a function that computes the sql_id yourself:
Take the SQL text and calculate a 128-bit MD5 hash.
The lower 64 bits are the sql_id (and, if you are interested, the lower 32 bits are the plan hash).
Of course this is error-prone, as Oracle might change the mechanism for calculating the sql_id in the future.
The following query is supposed to work but only if it is the very next statement execution following the one that you wish to identify.
select prev_sql_id, prev_child_number
from v$session
where sid = sys_context('userenv','sid')
And it does work...most of the time. My customer wrote a PL/SQL application for Oracle 12c and placed the above query in the part of the code that executes the application query. He showed me output that shows that it sometimes returns the wrong value for prev_child_number. I watched and it is indeed failing to always return the correct data. Over 99 distinct statement executions it returned the wrong prev_child_number 6 times.
I am in the process of looking for existing bugs that cause this query to return the wrong data and haven't found any yet. I may have to log a new SR with Oracle support.
I wrote a script in Management Studio that uses nested cursors and inserts data into different tables.
Since many insert statements are executed, there are many messages like
(231 row(s) affected)
The problem is that there seems to be a limit on these messages, so after a while they are not displayed anymore.
So if the error happens in the first cursor loops I see the error message, but if it happens near the end the error is not displayed; I just see a generic "Query completed with errors".
In my particular case I simply started inserting from the end (so the error came first and I found the problem).
But how can I do better?
Ideally I would like an option to log only the errors in the Messages window, not the
(231 row(s) affected)
kind of messages.
Which technique do you suggest?
Add SET NOCOUNT ON at the top to suppress these messages.
Note: this doesn't affect @@ROWCOUNT if you use it.
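A minimal sketch of the pattern (the table and insert are placeholders for your cursor logic):

```sql
SET NOCOUNT ON;  -- suppresses the "(n row(s) affected)" messages

-- ... nested cursor / insert logic here ...
INSERT INTO dbo.TargetTable (Col1) VALUES ('value');  -- placeholder insert

-- Errors are still raised and still appear in the Messages pane;
-- only the row-count chatter is suppressed.
```

With the row-count noise gone, any error message that does occur is no longer pushed out by the message limit.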
I am just starting to experiment with SQL Server Alerts. I set up an Alert on Errors/sec with a threshold of zero, thinking that I would get an email any time an error was written to the log. I got a lot of emails! I raised the threshold to notify me when it was over one per second, and I still get quite a few emails from time to time.
As an example, I get emails that contains something like this:
DESCRIPTION: The SQL Server performance counter 'Errors/sec' (instance '_Total') of object 'MyServerName:SQL Errors' is now above the threshold of 1.00 (the current value is 4.45).
Here is the command for the alert I am using:
EXEC msdb.dbo.sp_add_alert @name=N'SQL Errors',
@message_id=0,
@severity=0,
@enabled=1,
@delay_between_responses=0,
@include_event_description_in=1,
@notification_message=N'Check the ERRORLOG for details.',
@category_name=N'[Uncategorized]',
@performance_condition=N'MyServerName:SQL Errors|Errors/sec|_Total|>|0',
@job_id=N'00000000-0000-0000-0000-000000000000'
When I look at the log, I don't find any errors. I do find informational messages (a backup completed, etc.) though. Is this alert really "Entries/sec" and not truly "Errors/sec" or am I looking in the wrong place (SSMS | Server | Management | SQL Server Logs | Current) for the actual errors?
Not all errors are logged. An insert might break a constraint, and an error will be raised to the client, but that doesn't mean it is written to the error log.
For example, if you execute the following T-SQL:
RAISERROR ('This is an error!', 16, 1) WITH LOG
the error will be logged in the error log; omitting the WITH LOG will just cause the error to be raised without logging.
Errors have attributes you can filter on, such as message id and severity, but by the looks of things you are monitoring all of them. Severity might be what you need.
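For example, instead of the Errors/sec counter you could define an alert per severity level you care about. A sketch using sp_add_alert (note that alerts on severities below 20 only fire for errors that are actually written to the log, e.g. raised WITH LOG):

```sql
-- Sketch: fire on any logged severity-16 error rather than a perf counter
EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 16 errors',
    @severity = 16,
    @enabled = 1,
    @delay_between_responses = 60,
    @include_event_description_in = 1,
    @notification_message = N'Check the ERRORLOG for details.';
```

Severity-based alerts also avoid the counter's quirk of firing on log entries that aren't really errors.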
I have read about but never used Alerts. Based on this, I believe the SQL alerts system is driven by data written to the Windows Application event log, e.g. Start > Programs > Administrative Tools > Event Viewer, then look under the "Application" entry. (Look at the rest of them while you're there, to see all those bugs and errors you never knew about. It's a lot like looking under a rock in the woods...)
I am trying to resolve deadlocks. My application gets deadlocks all the time when there are more than 10 concurrent users.
I have tried SQL Profiler and can't figure it out.
The thing is, in SQL Profiler I have checked the Deadlock Graph event, but when I run the trace the event never gets logged. I can see many Deadlock and Deadlock Chain events, but no Deadlock Graph. Please advise.
Thanks for the help.
You need to have only Locks -> Deadlock graph selected if you want to see only the Deadlock graph event.
When you set up a filter for database name or database id, the Deadlock Graph event is not captured, even if you don't check "Exclude rows that do not contain values".
If you filter on, say, Duration or NTUserName, neither of which is populated by Deadlock Graph, the event is included (as long as you don't filter on the database, that is).
Likewise, if you add Lock:Acquired and filter on DatabaseName (not populated by Lock:Acquired), the event is included.
So the problem is with this precise combination.
Refer:
https://connect.microsoft.com/SQLServer/feedback/details/240737/filtering-for-database-name-id-filters-out-deadlock-graph-when-it-shouldnt
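As an alternative to chasing the event in Profiler, you can have deadlock details written straight to the SQL Server error log with trace flag 1222:

```sql
-- Write deadlock graph details (XML-style report) to the error log;
-- the -1 argument enables the flag globally for all sessions
DBCC TRACEON (1222, -1);

-- Turn it off again when you are done investigating
DBCC TRACEOFF (1222, -1);
```

This sidesteps the filtering bug entirely, since the output lands in the error log regardless of any trace filters.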