Using Sybase/ODBC: how to deal with disconnects while running long batch SQL queries?

We are developing an application in C# that uses ODBC and the "Adaptive Server Enterprise" driver to extract data from a Sybase DB.
We have a long SQL batch query that creates a lot of intermediate temporary tables and returns several DataTable objects to the application. We are seeing exceptions saying TABLENAME not found, where TABLENAME is one of our intermediate temporary tables. When I check the status of the OdbcConnection object in the debugger, it is Closed.
My question is very general. Is this the price you pay for having long-running complicated queries? Or is there a reliable way to get rid of such spurious disconnects?
Many thanks in advance!

There are a couple of ODBC timeout parameters - see the SDK docs at:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc20116.1550/html/aseodbc/CHDCGBEH.htm
Specifically CommandTimeOut and ConnectionTimeOut which you can set accordingly.
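As a rough sketch, those two parameters can be supplied in the ODBC connection string. All values here are illustrative (server, port, credentials, and timeout values are placeholders); check the linked SDK docs for the exact keyword spelling your driver version expects:

```
Driver={Adaptive Server Enterprise};server=myserver;port=5000;db=mydb;uid=myuser;pwd=mypassword;CommandTimeOut=600;ConnectionTimeOut=60
```

A CommandTimeOut of 0 typically means "wait indefinitely", which may be appropriate for a deliberately long-running batch.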
But it is much more likely that you're being blocked, or something similar, while the process is running. Ask your DBA to check the query plan for the various steps in your batch and look for specific problem areas, such as table scans, which could be masking your timeout issue.

Related

Is there a way to check the DB2 SQL log for the actual SQL operations executed against it, i.e. how many rows were fetched, etc.?

I am using DB2 v10.5, and I am pushing messages into the database I created using a gateway. Is there a way to check the DB2 SQL logs for the actual SQL operations executed, i.e. how many rows were fetched, etc.? While googling, I found these logs inside the DB2 server in the DIAGPATH (/db2/db2inst1/sqllib/db2dump/), but I don't see any SQL messages in there.
I have been checking the DB2 guides as well, but any ideas to help me on this are greatly appreciated. Thank you.
Activity event monitoring
Briefly:
It acts like a "logger" for executed statements. The information is written to the event monitor's tables for sessions that have this "logging" enabled.
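A minimal sketch of setting this up (the monitor name MYACTS is illustrative, and the workload clause may need adjusting for your configuration; the default target tables are named after the monitor):

```sql
-- Create an activity event monitor that writes executed statements to tables
CREATE EVENT MONITOR MYACTS FOR ACTIVITIES WRITE TO TABLE;
SET EVENT MONITOR MYACTS STATE 1;

-- Enable activity collection for the sessions you want "logged"
ALTER WORKLOAD SYSDEFAULTUSERWORKLOAD COLLECT ACTIVITY DATA WITH DETAILS AND VALUES;

-- Later, inspect the captured statement text
SELECT VARCHAR(STMT_TEXT, 200) AS STMT_TEXT
FROM ACTIVITYSTMT_MYACTS;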
There is also the package cache. This holds aggregate metrics for all executions of a statement that are still in the package cache (entries get evicted from the cache as newer statements arrive): MON_GET_PKG_CACHE_STMT
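A sketch of querying it (column choice is illustrative; -2 asks for data from all members):

```sql
-- Aggregate metrics for statements still in the package cache
SELECT NUM_EXECUTIONS,
       ROWS_READ,
       ROWS_RETURNED,
       VARCHAR(STMT_TEXT, 200) AS STMT_TEXT
FROM TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS T
ORDER BY NUM_EXECUTIONS DESC
FETCH FIRST 20 ROWS ONLY;
```

Note the metrics are aggregates since the entry entered the cache, not a per-execution log, so this complements rather than replaces the activity event monitor.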
You can also use the Db2 Database Management Console which is
A new browser-based console that helps you administer, monitor, manage and optimize the performance of IBM Db2 for Linux, UNIX and Windows databases.
and which itself collects data via functions such as MON_GET_PKG_CACHE_STMT and activity event monitors.

Asynchronous Triggers in Azure SQL Database

I'm looking to implement some "Asynchronous Triggers" in Azure SQL Database. There was this other question that asked this same question with pretty much the same needs as mine but for SQL Server 2005/2008. The answer was to use the Service Broker. And it's a great answer that would serve my needs perfectly if it was supported in Azure SQL Databases, but it's not.
My specific need is that a user selects and stores a fairly small set of inputs. A couple of those inputs are identifications of specific algorithms, plus some aggregate-level data, all in a single record of a single table. Once saved, we want a trigger to execute the selected algorithms and process the aggregate-level data, breaking it down into tens of thousands of records across a few different tables. This takes 2-8 seconds to process, depending on the algorithms. (I'm sure I could optimize this a bit more, but I don't think I can get it faster than 2-5 seconds, just because of the logic that must be built into it.)
I am not interested in installing SQL Server inside a VM in Azure - I specifically want to continue using Azure SQL Database for many reasons I'm not going to get into in this post.
So my question is: Is there a good/obvious way to do this in Azure SQL Database alone? I can't think of one. The most obvious options that I can see are either not inside Azure SQL Database or are non-starters:
Use real triggers rather than asynchronous triggers, but that's a problem because it takes many seconds for these triggers to process as they crunch numbers based on the stored inputs.
Use a poor-man's queueing system in the database (i.e. a new table that is treated as a queue and insert records into it as messages) and poll that from an external/outside source (Functions or Web Jobs or something). I'd really like to avoid this because of the added complexity and effort. But frankly, this is what I'm leaning towards if I can't get a better idea from the smart people here!
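The queue-table option above can be sketched in T-SQL like this (all table, trigger, and column names are illustrative; the READPAST/UPDLOCK hints let multiple pollers claim items without blocking each other):

```sql
-- Queue table; the real trigger stays fast because it only enqueues
CREATE TABLE dbo.WorkQueue
(
    QueueId     bigint IDENTITY PRIMARY KEY,
    InputId     int       NOT NULL,  -- points at the saved inputs record
    EnqueuedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    ProcessedAt datetime2 NULL
);
GO
CREATE TRIGGER trg_Inputs_Enqueue ON dbo.Inputs AFTER INSERT AS
    INSERT dbo.WorkQueue (InputId)
    SELECT Id FROM inserted;
GO
-- The external poller (Function / WebJob) atomically claims one item
WITH next AS (
    SELECT TOP (1) *
    FROM dbo.WorkQueue WITH (ROWLOCK, READPAST, UPDLOCK)
    WHERE ProcessedAt IS NULL
    ORDER BY QueueId
)
UPDATE next SET ProcessedAt = SYSUTCDATETIME()
OUTPUT inserted.InputId;
```

The poller then runs the slow number-crunching outside the user's transaction, which is what makes the trigger effectively asynchronous.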
Thanks for the help!
(I am posting this here and not on DBA.StackExchange because this is more of an architectural problem than a database problem. You may disagree but because my current best option involves non-database development and the above question I referenced that was almost perfect for me was also located here, I chose to post here instead of there.)
As far as I know, it's not possible to do directly in Azure SQL Database, but there are a few options:
As @gotqn mentioned in a comment, you can use Azure Automation/Runbooks, applied to Azure SQL specifically.
You can also check out database jobs.
You can use Logic Apps. It has a SQL connector that implements an asynchronous trigger...
https://azure.microsoft.com/en-us/services/logic-apps/

SQL Server 2005- Investigate what caused tempdb to grow huge

The tempdb of my instance grew huge, eating up all the available disk space and causing applications to go down. I had to restart the instance in an emergency. However, I want to investigate and dig deep into what caused tempdb to grow huge all of a sudden. What were the queries and processes that caused this? Can someone help me pull the required info? I know I won't get much historical data from SQL Server. I do have Idera SQL Diagnostic Manager (a third-party tool) deployed. Any help with using the tool would be really appreciated.
As for postmortem analysis, you can use the tools already installed on your server. For future proactive analysis, you can use SQL traces directly in SQL Profiler, or query the traces using SQL statements.
sys.fn_trace_gettable
sys.trace_events
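A sketch of reading a saved trace file with those two objects (the file path is illustrative):

```sql
-- Read trace events from a file and decode the numeric event class
SELECT te.name AS event_name,
       t.DatabaseName,
       t.TextData,
       t.StartTime
FROM sys.fn_trace_gettable('C:\trace\mytrace.trc', DEFAULT) AS t
JOIN sys.trace_events AS te
  ON t.EventClass = te.trace_event_id
ORDER BY t.StartTime;
```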
You can also use an auditing tool that tracks every event that happens on a SQL Server instance and its databases, such as ApexSQL Comply. It also uses SQL traces, configures them automatically, and processes captured information. It tracks object and data access and changes, failed and successful logins, security changes, etc. ApexSQL Comply loads all captured information into a centralized repository.
There are several reasons that might cause your tempdb to get very big.
A lot of sorting - if this requires more memory than your SQL Server has, it will store all temp results in tempdb
DBCC commands - if you're frequently running commands such as DBCC CHECKDB, this might be the cause. These commands store their results in tempdb
Very large result sets - these also use tempdb to run properly
A lot of heavy transactions, such as bulk inserts
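To see which current sessions are holding tempdb space, a sketch using a DMV available since SQL Server 2005 (column choice is illustrative):

```sql
-- Per-session tempdb page allocations; internal objects cover sorts,
-- spools, and hash work, user objects cover #temp tables and table variables
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
WHERE database_id = 2   -- tempdb
ORDER BY internal_objects_alloc_page_count DESC;
```

This only reflects live sessions, so for the postmortem you will still need the traces or Idera's history, as described above.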
Check out this article for more details on how to troubleshoot this: http://msdn.microsoft.com/en-us/library/ms176029.aspx
AK2,
We have the Idera DM tool as well. If you know the time frame around when your tempdb was used heavily, you can go to History in the Idera tool to see what query was running at that time and what led the server to hose... On the "Tempdb Space Used Over Time" chart you would usually see a straight line or a gradual graph, but at the time of heavy tempdb use there's a spike and a straight drop. Referring to this time frame, you can check Sessions > Details to see the exact query and who was running it.
On our server this usually happens when there is a long query doing lots of joins, or when there is an expensive query dumping into a temp table / table variable.
Hope this will help.
You can use SQL Profiler. Please try the link below
Sql Profiler

Application Hangs on SQL Server - restart required every time

We have an application which has a SQL Server 2000 Database attached to it. After every couple of days the application hangs, and we have to restart SQL Server service and then it works fine. SQL Server logs show nothing about the problem. Can anyone tell me how to identify this issue? Is it an application problem or a SQL Server problem?
Thanks.
Is it an application problem or a SQL Server problem?
Is it possible to connect to MS SQL Server using Query Analyzer or another instance of your application?
General tips:
Use Activity Monitor to find information about concurrent processes, locks and resource utilization.
Use Sql Server Profiler to trace server and database activity, to capture and save data to a table or file to analyze it later.
You can use Dynamic Management Views (in Management Studio, under Database name > Views > System Views) to get more detailed information about MS SQL Server internals.
If you have problems with performance (not your case), you can use Performance Monitor and Data Collector Sets to gather performance information.
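A sketch of the DMV approach (note these views require SQL Server 2005 or later; on the SQL Server 2000 in the question you would use sp_who2 and sysprocesses instead):

```sql
-- What is currently running, and whether anything is blocked
SELECT r.session_id,
       r.status,
       r.blocking_session_id,
       r.wait_type,
       t.text AS current_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;  -- skip system sessions
```

A nonzero blocking_session_id pointing repeatedly at the same session is a strong hint the "hang" is blocking rather than an application fault.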
It's hard to predict the issue, but I suggest you check your application first. Check what operations you are performing against the database, and whether you are taking care of connection pooling; unused open connections can create issues.
Check if you can get any logs from your application. Without any log information we can hardly suggest anything.
Read this
The application may be hanging due to a deadlock.
Check the stored procedures that run at that time using Profiler,
check the table manipulation (consider NOLOCK hints where dirty reads are acceptable),
check the buffer size, and consider segregating the DB into two or three modules.
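To confirm the deadlock theory without Profiler, a sketch assuming SQL Server 2000's deadlock trace flags (1204 prints deadlock detail, 3605 routes it to the error log):

```sql
-- Enable server-wide deadlock reporting into the SQL Server error log
DBCC TRACEON (1204, 3605, -1);
```

If deadlocks are occurring, the error log will then name the sessions and resources involved each time one is chosen as the victim.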

How do I determine the query that sp_cursorfetch is using

Profiler shows my server is overloaded by lots of calls to sp_cursorfetch, but I want to know which queries are causing all this traffic.
Profiler won't work in this case.
I ran a trace to test it out, and queried the table I created from it with this:
SELECT CPU, TextData FROM cpu WHERE LoginName = 'db_name_here' ORDER BY CPU DESC
-- Be sure to replace db_name_here
The result showed rows like this:
CPU  TextData
0    exec sp_cursorfetch 180150000, 16, 7415, 1
========
The only answers I found on this are:
Select statements are the only cause of these cursor fetches, and examining your indexes of most commonly used tables is a good 1st start to resolving the problem
You may be able to filter a trace on the SPID of the cursor fetch call to see what it's doing before and after the sp_cursorfetch is run.
Only fetch a subset of the total recordset you currently fetch. Say you grab 100 rows now: only grab 10, because 10 is the most the user can see at any given time.
In response to the comment:
Thanks for your suggestions, all of which are helpful. Unfortunately the queries in question are from a third-party application, and I do not have direct access to view or modify the queries. If I find a query that is a particular problem, I can submit a support request to have it reviewed. I just need to know what the query is, first. – Shandy Apr 21 at 8:17
You don't need access to the application to try out most of my aforementioned recommendations. Let's go over them:
Select statements are the only cause of these cursor fetches, and examining your indexes of most commonly used tables is a good 1st start to resolving the problem
This is done on the database server. You just need to run a tuning profile on the database, and then run the Database Engine Tuning Advisor using the profile generated. This will assist with improving indexes.
You may be able to filter a trace on the SPID of the cursor fetch call to see what it's doing before and after the sp_cursorfetch is run.
This is something you do using SQL profiler as well
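If the trace has been saved to a table (like the cpu table queried above), a sketch of that filter in plain SQL (the SPID value is illustrative; StartTime, EventClass, TextData, and SPID are standard Profiler trace columns):

```sql
-- Everything the same session ran, in order, around its sp_cursorfetch calls
SELECT StartTime, EventClass, TextData
FROM dbo.cpu            -- the trace table created earlier
WHERE SPID = 57         -- the SPID seen issuing sp_cursorfetch
ORDER BY StartTime;
```

The statement that opened the cursor (typically an sp_cursoropen call carrying the real SELECT text) should appear shortly before the first fetch for that SPID.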
Only fetch a subset of the total recordset you currently fetch. Say you grab 100 rows now: only grab 10, because 10 is the most the user can see at any given time.
This is done at the application level
What SQL Server version are you running? In my case, the resolution ended up being an upgrade to SQL Server 2008. I would try this out to see where it goes.
Since you don't have access to the application, getting around cursor use is going to be a problem most likely. If you take a look at http://sqlpractices.wordpress.com/2008/01/11/performance-tuning-sql-server-cursors/ you can see that most alternatives involve editing the application queries ran.
What is the real problem? Why are you profiling the database?
You might need to use profiler for this.
I'm not sure what you are trying to achieve; if you are doing a batch process, the execution plan might be helpful.
Hope it helps :)
