TimeoutException in SQL Server 2005 migrated from SQL Server 2000 - sql-server

I've recently upgraded an MSSQL server from version 2000 to 2005 to make use of UDFs and tweak some results in a system. The thing is that we don't have the source code.
So I swapped in the new SQL Server version and everything worked fine... except when we have to run a large query. I'm getting this error:
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
I've searched around, and what I've found is that this tends to be a CommandTimeout issue that has to be solved programmatically, since the timeout is supposed to be on the client side. But that's weird, because it always worked before, even with big queries.
My guess is that it's not something client side, because everything worked fine on SQL Server 2000.
Is there any way to remove every kind of timeout? The system is completely internal and only a few people use it, so there's no risk of outages... I'd rather have a query run forever than see these annoying messages.
Thanks in advance!

Have you updated all statistics after the upgrade?
How to: Upgrade to SQL Server 2005 (Setup)
After upgrading the Database Engine to SQL Server 2005, complete the
following tasks:
...
Update statistics - To help optimize query performance, we recommend
that you update statistics on all databases following upgrade. Use the
sp_updatestats stored procedure to update statistics in user-defined
tables in SQL Server 2005 databases.
Update usage counters - In earlier versions of SQL Server, the values
for the table and index row counts and page counts can become
incorrect. To correct any invalid row or page counts, we recommend
that you run DBCC UPDATEUSAGE on all databases following upgrade.
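The two post-upgrade steps quoted above can be sketched as a script run in the context of each migrated database (the database name is a placeholder):

```sql
USE MyMigratedDatabase;  -- placeholder name; repeat for each upgraded database
GO
-- Refresh statistics on all user-defined tables
EXEC sp_updatestats;
GO
-- Correct any row and page counts carried over from SQL Server 2000
DBCC UPDATEUSAGE (0);  -- 0 = the current database
GO
```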

You can set the timeout either in your config file or in code when you set up the command. Note that CommandTimeout is measured in seconds, and a value of 0 disables the timeout entirely:
SqlCommand1.CommandTimeout = 400000

Related

How does SQL deal with a long running query via Linked Server if there's a PC reboot?

I have a SQL Server database and have a linked server connection to an Oracle DB.
I had the following query running on my SQL Server:
INSERT INTO dbo.my_table_on_sql_server
SELECT *
FROM OPENQUERY (linkedservername, 'SELECT * FROM target_table')
The target_table has 50 million rows and I'm aware the query takes time to execute but has successfully completed before.
This time, though, my PC restarted automatically in the middle of the query. SSMS 2017 reopened automatically as soon as the PC came back up, but I could no longer see the query running. my_table_on_sql_server has no data.
I'd like to understand what happens in SQL Server in the event of such a situation. Am I correct in assuming that the query was killed / rolled back? Is there any query running in the background? I've seen some related answers on this forum but wanted to specifically understand this for linked servers, as I use them a lot to retrieve data from other DBs for my job.
I'm more concerned about the Oracle DB as I don't want my query to impact any performance upstream. I only have a read-only access permission to the Oracle DB.
Thank you!
On shutdown the query will be aborted and the INSERT rolled back. The rollback may happen during shutdown or after restart, and may take some time to complete.
There's no automatic retry or anything else that will access the linked Oracle server after the shutdown.
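If you want to confirm whether a rollback is still in progress after the restart, one way to check (assuming a version of SQL Server with these DMVs, which SSMS 2017 suggests; the database name is a placeholder) is:

```sql
-- Sessions currently rolling back, with estimated progress
SELECT session_id, command, status, percent_complete
FROM sys.dm_exec_requests
WHERE command LIKE '%ROLLBACK%';

-- Any transactions still open in the target database
DBCC OPENTRAN ('my_database');  -- placeholder database name
```

If both come back empty, the rollback has finished and the table is simply back to its pre-INSERT state.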

Facing slowness in database server after migrating from SSMS 2008 to SSMS 2016

We have an RDP server that was running the 2008 versions of SSMS and the OS. Recently we migrated this server to the 2016 versions of both the OS and SSMS.
The newly configured machine (with SSMS 2016) matches the old one (with SSMS 2008) in terms of system configuration: a 64-bit OS on an x64-based processor, 64.0 GB of RAM, and 2.39 GHz (32 processors).
We are facing severe performance issues while running stored procedures on the SSMS 2016 server, even though the same code base was migrated from SQL Server 2008. We load data to these servers using SSIS ETL tools.
For example, a stored procedure that takes 1 hour to execute on the old server (with SSMS 2008) takes 10 hours on the new server (with SSMS 2016), sometimes even more.
To identify the root cause we have tried the approaches below, but so far nothing has worked.
• After migration, we changed the compatibility level of SQL Server from 2008 to 2016.
• Restored the database once again from the old server (SSMS 2008) to the new server (SSMS 2016) without changing the compatibility level.
• Recompiled the stored procedures on the new server (SSMS 2016).
• Updated the statistics on the new server (SSMS 2016).
• Reconfigured/thickened the drives on the new Windows server.
• While running the time-consuming stored procedures on the new server (SSMS 2016), ran SQL Server Profiler in parallel to identify the issue, but couldn't find anything.
• Ran the same query on the SSMS 2008 and SSMS 2016 servers at the same time; on the old server (SSMS 2008) execution completed much faster than on the new server (SSMS 2016).
Is there any solution we can try to get the same execution time on both servers?
Thanks in Advance
Bala Muraleedharan
I'm going to assume the SQL Server version got updated too, as the SSMS version would not make any difference.
It's impossible to tell for sure, of course, but query execution times can be drastically affected by the new cardinality estimator used in SQL Server 2014 and above. 99% of the time things run faster, but once in a while the new CE gets crushed. Add this line to the stored procedure to run with the old 2008 CE and see if it makes any difference.
OPTION(QUERYTRACEON 9481);
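For context, the hint is appended to the individual statement inside the procedure, not to the EXEC call. A hypothetical sketch (procedure, table, and column names are made up; note that QUERYTRACEON normally requires sysadmin permission):

```sql
ALTER PROCEDURE dbo.MySlowProc  -- placeholder procedure
AS
BEGIN
    SELECT o.OrderID, c.CustomerName          -- placeholder query
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
    OPTION (QUERYTRACEON 9481);  -- use the pre-2014 cardinality estimator
END;
```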
This problem may have two causes:
1- Check the settings of your SQL Server. Specifically, limit maximum server memory to 60% of your RAM and increase the number of tempdb (system database) data files to match your CPU core count.
2- Check your SP syntax. If you are using table variables (@Table), change them to temp tables (#Table).
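A sketch of those two server-level changes (the 60% figure and the core count come from the suggestion above; the memory value and the file path are placeholders to adjust for your hardware):

```sql
-- Cap max server memory (value in MB; roughly 60% of 64 GB here)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 39322;
RECONFIGURE;

-- Add tempdb data files until their count matches the CPU cores
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf', SIZE = 1GB);
```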

Is it possible to limit the maximum duration of queries in SQL Server 2000?

Specifically I'd like to detect when a query has been executing for over 5 minutes and then cause it to rollback, I could no doubt do this at an application level but am investigating if SQL Server has any built in mechanism to do this for me.
Note, for clarification, I'm sadly still running SQL Server 2000.
In the discussion of the question How do I set a SQL Server script's timeout from within the script? I found a reference to the article There's no such thing as a query timeout..., which says that "query timeouts are a client-side concept only".
You could consider using the
SET QUERY_GOVERNOR_COST_LIMIT value
command, but unfortunately it isn't exactly what you are looking for.
As the documentation says, it disallows execution of any query
whose estimated elapsed time, in seconds, exceeds the specified value on a specific hardware configuration.
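Usage is straightforward. For example, to reject any query the optimizer estimates will take longer than 300 seconds on the reference hardware:

```sql
-- Applies to the current connection only
SET QUERY_GOVERNOR_COST_LIMIT 300;

-- A value of 0 (the default) turns the limit off again
SET QUERY_GOVERNOR_COST_LIMIT 0;
```

Note the check happens before execution, against the *estimated* cost; it will not stop a query that was estimated cheap but runs long.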
In SQL Server Management Studio, go to Tools > Options > Query Execution.

Does running a SQL Server 2005 database in compatibility level 80 have a negative impact on performance?

Our software must be able to run on SQL Server 2000 and 2005. To simplify development, we're running our SQL Server 2005 databases in compatibility level 80. However, database performance seems slower on SQL 2005 than on SQL 2000 in some cases (we have not confirmed this using benchmarks yet). Would upgrading the compatibility level to 90 improve performance on the SQL 2005 servers?
I think I read somewhere that the SQL Server 2005 database engine should be about 30% faster than the SQL Server 2000 engine. It might be that you have to run your database in compatibility mode 90 to get these benefits.
But I stumbled on two scenarios where performance can drop dramatically under MSSQL 2005 compared to MSSQL 2000:
Parameter sniffing: When using a stored procedure, SQL Server calculates exactly one execution plan at the time you first call the procedure. That execution plan depends on the parameter values given for that call. In our case, procedures which normally took about 10 seconds ran for hours under MSSQL 2005. Take a look here and here.
When using distributed queries, MSSQL 2005 behaves differently with regard to assumptions about the sort order on the remote server. The default behavior is that the server copies the entire remote tables involved in a query to the local tempdb and then executes the joins locally. The workaround is to use OPENQUERY, where you can control exactly which result set is transferred from the remote server.
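To illustrate the OPENQUERY workaround, here is a hypothetical comparison (linked server, table, and column names are all made up):

```sql
-- Four-part name: the remote table may be copied wholesale
-- to local tempdb before the join is evaluated
SELECT l.ID, r.Amount
FROM dbo.LocalOrders AS l
JOIN LinkedSrv.RemoteDb.dbo.RemoteAmounts AS r
  ON r.ID = l.ID;

-- OPENQUERY: the remote server runs the inner query itself,
-- so only its (filtered) result set crosses the link
SELECT l.ID, r.Amount
FROM dbo.LocalOrders AS l
JOIN OPENQUERY(LinkedSrv,
    'SELECT ID, Amount FROM RemoteDb.dbo.RemoteAmounts WHERE Amount > 1000') AS r
  ON r.ID = l.ID;
```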
After you moved the DBs over to 2005, did you:
update the stats with full scan?
rebuild the indexes?
First try that and then check performance again.
Also, an FYI: if you run compatibility level 90, some things are no longer supported, such as old-style outer joins (*= and =*).
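Those two maintenance steps might look like this for a single table (the table name is a placeholder; the undocumented but widely used sp_MSforeachtable can loop over all user tables):

```sql
-- Rebuild all indexes on the table (this also refreshes index statistics)
ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- Refresh the remaining (column) statistics with a full scan
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

-- Or across every user table in the database:
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN;';
```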
Are you using subselects in your queries?
From my experience, a SELECT statement with subselects that runs fine on SQL Server 2000 can crawl on SQL Server 2005 (it can be 10x slower or worse!).
Make an experiment: rewrite one query to eliminate the subselects and see how its performance changes.
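As an illustration of that experiment, here is a correlated subselect rewritten as a join against a pre-aggregated derived table (table and column names are made up):

```sql
-- Correlated subselect: conceptually re-evaluated per outer row
SELECT c.CustomerID,
       (SELECT MAX(o.OrderDate)
        FROM dbo.Orders AS o
        WHERE o.CustomerID = c.CustomerID) AS LastOrder
FROM dbo.Customers AS c;

-- Equivalent join: the aggregate is computed once, then joined
SELECT c.CustomerID, o.LastOrder
FROM dbo.Customers AS c
LEFT JOIN (SELECT CustomerID, MAX(OrderDate) AS LastOrder
           FROM dbo.Orders
           GROUP BY CustomerID) AS o
  ON o.CustomerID = c.CustomerID;
```

Comparing the two execution plans (and STATISTICS IO output) on both servers should show whether the subselect is the culprit.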

Is there an alternative to the SQL Profiler for SQL Server 2000

I am trying to optimize some stored procedures on a SQL Server 2000 database, and when I try to use SQL Profiler I get the error message "In order to run a trace against SQL Server you have to be a member of sysadmin fixed server role." It seems that only members of the sysadmin role can run traces on the server (something that was fixed in SQL Server 2005), and there is no way in hell that I will be granted that server role (company policies).
What I'm doing now is inserting the current time minus the time the procedure started at various stages of the code, but I find this very tedious.
I was also thinking of replicating the database to a local installation of SQL Server, but the stored procedure uses data from many different databases, so I would spend a lot of time copying data locally.
So I was wondering: is there some other way to profile SQL code? (Third-party tools, different practices, something else?)
Your hands are kind of tied without Profiler.
You can, however, start tuning your existing queries using Query Analyzer or any other query tool and examining the execution plans. With QA, you can use the Show Execution Plan option. From other tools you can use:
SET STATISTICS PROFILE ON / OFF
In query analyser:
SET STATISTICS TIME ON
SET STATISTICS IO ON
Run query and look in the messages tab.
It occurs to me this may require same privileges, but worth a try.
There is a workaround on SQL 2000 to obfuscate the Profiler connection dialogue box, limiting the sysadmin connection to running traces only; it is described in a post on the SQLTeam blog.
