Slow performance when using fully qualified name in SELECT - sql-server

I'm using SQL Server 2008 R2 for this issue.
In one of my apps, I need to refer to a table from another database. So I do a query:
USE Db1
SELECT * FROM Db2.dbo.Table1
It takes ~2 seconds for the query to complete, even though the table has just 300 records. The delay is consistent: I ran it in Management Studio, hit Execute, and got the same result. I did this around 10 times with consistent results.
Now when I run the query but this time running it in the context of the actual database:
USE Db2
SELECT * FROM Table1
There's virtually no wait time when the same results are returned.
Now the weird part is, when I go back to my first query, the delay no longer happens! And this behavior is reproduced every time I restart SQL Server.
Has anyone encountered this behavior before? Do you have any ideas on what I could be doing wrong?

Finally figured this one out. The Auto Close property for the database being referenced in the SELECT was set to True. I set this to False and the delay during the SELECT calls disappeared.
So what was happening was that SQL Server was starting the database up for every SELECT statement! I checked the Event Viewer and, sure enough, there is a log entry of the database starting up for every call.
To set this to false, I used Management Studio, right click on the Database, then go to Properties. In the Properties window, select Options and under the Automatic group, the first item is Auto Close. Set this to False.
See the link below for more information on the Auto Close property. It is set to True by default for databases created with SQL Server Express. Set it to False, and you should not encounter this problem.
Auto_Close
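If you prefer to script the change, the same option can be set with T-SQL; a minimal sketch, assuming the referenced database is named Db2 as in the question:
-- Turn off Auto Close so the database stays open between queries
ALTER DATABASE Db2 SET AUTO_CLOSE OFF;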

There is no mystery here, and the fully qualified name plays no role at all.
The table data are cached in memory the first time you ask for them. Any subsequent calls will read the data from memory instead of reading them from disk. Additionally, SQL Server caches compiled execution plans and reuses them for new queries.
Each time you restart SQL Server, you start with empty memory buffers and an empty execution plan cache, so the first query you execute will be significantly slower.
In order to get meaningful results, you need to clear the buffer and execution plan cache using these commands:
DBCC FREEPROCCACHE will clean the execution plan cache, forcing new queries to be recompiled
DBCC DROPCLEANBUFFERS will clean the memory buffers, forcing SQL Server to reload data from disk
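For example, before each timing run on a test server you could clear both caches (don't do this on a busy production instance):
-- Clear cached execution plans
DBCC FREEPROCCACHE;
-- Write dirty pages to disk, then drop the clean data pages from memory
CHECKPOINT;
DBCC DROPCLEANBUFFERS;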

Related

SQL Server 2008 plan cache is almost always empty

In order to investigate query plan usage, I'm trying to understand what kind of query plans are stored in memory.
Using this query:
SELECT objtype AS 'Cached Object Type',
COUNT(*) AS 'Number of Plans',
SUM(CAST(size_in_bytes AS BIGINT))/1048576 AS 'Plan Cache Size (MB)',
AVG(usecounts) AS 'Avg Use Counts'
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY objtype
I get an almost empty plan cache.
There is 128 GB of RAM on the server and ~20% is free. The SQL Server instance is not constrained by memory.
Basically, I have ad hoc queries (not parameterized, not stored procedures).
But why does SQL Server empty the query plan cache so frequently? What kind of issue do I have?
In the end, only an instance restart solved my problem. Now the plan cache looks healthier.
If the server isn't under memory pressure then some other possibilities from the plan caching white paper are below.
Are any of these actions scheduled frequently? Do you have auto close enabled? (There is a quick query to check the auto close setting after the lists below.)
The following operations flush the entire plan cache, and therefore cause fresh compilations of batches that are submitted the first time afterwards:
Detaching a database
Upgrading a database to SQL Server 2005
Upgrading a database to SQL Server 2008
Restoring a database
DBCC FREEPROCCACHE command
RECONFIGURE command
ALTER DATABASE … MODIFY FILEGROUP command
Modifying a collation using ALTER DATABASE … COLLATE command
The following operations flush the plan cache entries that refer to a particular database, and cause fresh compilations afterwards.
DBCC FLUSHPROCINDB command
ALTER DATABASE … MODIFY NAME = command
ALTER DATABASE … SET ONLINE command
ALTER DATABASE … SET OFFLINE command
ALTER DATABASE … SET EMERGENCY command
DROP DATABASE command
When a database auto-closes
When a view is created with CHECK OPTION, the plan cache entries of the database in which the view is created are flushed.
When DBCC CHECKDB is run, a replica of the specified database is created. As part of DBCC CHECKDB's execution, some queries against the replica are executed, and their plans cached. At the end of DBCC CHECKDB's execution, the replica is deleted and so are the query plans of the queries posed on the replica.
The following sp_configure/reconfigure operations also clear the procedure cache:
access check cache bucket count
access check cache quota
clr enabled
cost threshold for parallelism
cross db ownership chaining
index create memory
max degree of parallelism
max server memory
max text repl size
max worker threads
min memory per query
min server memory
query governor cost limit
query wait
remote query timeout
user options
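To answer the auto close question above quickly, a small query against sys.databases (SQL Server 2005 and later) lists any database that has it enabled:
-- Databases with Auto Close turned on
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;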
I had the same issue just about a week ago and also posted several questions. Even though I have not actually found the answer to the problem, I've got some insight into the process. Silly as it sounds, a SQL Server service restart helped, but it raised another problem: the recovery process continued for 4 hours. It seems like a pretty large transaction was in place...
empty-plan-cache-problem
Almost empty plan cache

SSRS Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding

I have a reporting solution with several reports. Up to now, I have been able to add a dataset based on a SPROC with no problems. However, when I try to add the latest dataset and use a SPROC for its query type, clicking on Refresh Fields gives me the following error:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
I have tested the database connection in Data Source properties>Edit>Test Connection, and it's working fine.
I have increased the timeout to 100 in the following areas:
The connection string for the datasource, which is Connect Timeout=100
Tools>Options>Database Tools>Query and View Designers: 'Cancel long running query' is set to 100.
Tools>Options>Database Tools>Table and Database Designers: checked 'Override connection string time-out value for table designer updates'; 'Transaction time-out after' is set to 100.
The SPROC runs fine in the SQL database. It takes about 55 seconds.
Any other ideas?
Thanks.
UPDATE: I now can't add any dataset with a SPROC, even though the SPROCs are all working fine in SQL!
If you are using Report Builder, you can increase timeout also in your DataSet.
I also faced the same problem when adding a newly added column to a stored procedure.
I overcame the issue in the following way:
Alter the stored procedure, commenting out every query except the final SELECT command.
Once the new column has been picked up, un-comment those queries in the SP again.
The thing to remember with your report is that when it is run, it will attempt to run ALL the datasets just to make sure they are runnable and the data they request can be returned. So by running each proc separately you are in fact not duplicating what SSRS is trying to do... and to be honest, don't bother.
What you could try is running sp_who while the report is running, or even just manually going through the procedures to see what tables they have in common. Since your proc takes 52 seconds to return its dataset, I'm going to assume it's doing some heavy lifting. Without the queries, nobody will be able to tell what the exact problem is.
I suggest trying NOLOCK to see if that resolves your issue. If it does, then your procs are fighting for data and blocking each other... possibly in an endless loop. Using NOLOCK is NOT a fix; read what it does and judge for yourself, however.
My solution was to go to the Dataset Properties for the given problem dataset, paste the query in the Query field, click Refresh Fields, and click Ok.

SSRS sql query runs slow

I have a long-standing issue that keeps popping up.
I created an SSRS report with a SELECT query. When I try to run the report, it takes around 20 seconds to render.
I've checked SQL Profiler, and indeed the query runs for more than 20 seconds.
When I copy the query to Management Studio, it runs in 0 seconds.
As written in earlier posts, I've tried the workaround of declaring parameters in the query and setting their values with the SSRS params. Sometimes it works; currently it doesn't...
Any other workaround?
Configure your report to run from the cache.
Caching is a copy of the last executed report. It is not a persisted copy; it has a lifetime (like caching for 30 minutes). It is stored in the report server's temp database (ReportServerTempDB). You can have only one "instance" per report (if you have parameters, you will have one per combination of parameters).
You can do that on the Execution tab of the report in Report Manager.
Make the sql statement into a stored procedure and use the WITH RECOMPILE option in the sp.
E.g.
CREATE PROCEDURE dbo.spname @ParamName varchar(30)
WITH RECOMPILE
AS
    -- procedure body goes here
This will help counteract the "parameter sniffing" during the procedure execution and help improve performance.
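If recompiling the whole procedure is too heavy-handed, a common alternative (a sketch, not from the original answer; the table and column names are illustrative) is to recompile only the statement that suffers from parameter sniffing:
CREATE PROCEDURE dbo.spname @ParamName varchar(30)
AS
    SELECT *
    FROM dbo.YourTable                    -- illustrative table name
    WHERE SomeColumn = @ParamName         -- illustrative column name
    OPTION (RECOMPILE);                   -- recompile just this statement with the current parameter value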

Is it possible to set a timeout for a SQL query on Microsoft SQL Server?

I've got a scenario where a user sometimes selects the right parameters and makes a query that takes several minutes or more to execute. I cannot prevent him from selecting such a combination of parameters (it's quite legal), so I'd like to set a timeout on the query.
Note that I really want to stop the query execution itself and rollback any transactions, because otherwise it hogs up most of server resources. Add an impatient user who restarts the application and tries the combination again, and you've got a recipe for a disaster (read: SQL Server DoS).
Can this be done and how?
As far as I know, apart from setting the command or connection timeouts in the client, there is no way to change timeouts on a query-by-query basis on the server.
You can indeed change the default of 600 seconds (the remote query timeout) using sp_configure, but that setting is server-scoped.
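For reference, a sketch of changing that server-scoped setting (it only applies to remote queries, not local ones):
-- Server-wide setting; it cannot be applied per query
EXEC sp_configure 'remote query timeout', 100;
RECONFIGURE;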
Hmm!
Did you try LOCK_TIMEOUT?
Note down what it was originally before running the query.
Set it for your query.
After running your query, set it back to the original value.
SET LOCK_TIMEOUT 1800;
SELECT @@LOCK_TIMEOUT AS [Lock Timeout];
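Putting the three steps together, a sketch (the 1800 ms value and the query are illustrative; -1 is the documented default, meaning wait indefinitely). Note that LOCK_TIMEOUT only limits how long a statement waits on a blocked lock; it does not cap total execution time:
SELECT @@LOCK_TIMEOUT AS [Original Lock Timeout];  -- 1) note what it was originally
SET LOCK_TIMEOUT 1800;                             -- 2) set it for your query (milliseconds)
SELECT * FROM Db2.dbo.Table1;                      -- your query goes here (illustrative)
SET LOCK_TIMEOUT -1;                               -- 3) set it back (or to whatever step 1 reported)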
I might suggest 2 things.
1)
If your query takes a lot of time because it's using several tables that might involve locks, a quick solution is to run your queries with the NOLOCK hint.
Simply add WITH (NOLOCK) to all your table references, e.g. SELECT * FROM YourTable WITH (NOLOCK), and that will prevent your query from blocking on concurrent transactions.
2) If you want to be sure that all of your queries run in (let's say) less than 5 seconds, then you could add what @talha proposed, which worked sweetly for me.
Just add at the top of your execution
SET LOCK_TIMEOUT 5000; --5 seconds.
That will make your query take less than 5 seconds or fail. Then you should catch the exception and roll back if needed.
Hope it helps.
In Management Studio you can set the timeout in seconds:
menu Tools => Options, set the field, and then click OK.
It sounds like more of an architectural issue, and any timeout/disconnect you can do would be more or less a band-aid. This has to be solved on the SQL Server side, by way of a read-only replica, transaction log shipping (to give you a read-only server to connect to), replication, and such. Basically you provide a DMZ SQL Server that heavy reads can go to without killing anything. This is very common. A well-designed SQL system won't be taken down by a DoS - that'd be like a car that dies if you step on the gas.
That said, if you are at liberty to change the code, you could guesstimate whether the query is too heavy and either reject it or return only X rows from your stored procedure. If you are tied to some reporting tool and can't control the SELECT it generates, you could point it at a view and then put the safety valve in the view.
Also, if up-to-the-minute freshness isn't critical and you can compromise on that (like monthly sales data), then having a job compile a physical table out of the complex joins might do the trick; that way every query would be sub-second.
It entirely depends on what you are doing, but there is always a solution. Sometimes it takes extra coding to optimize it, sometimes it takes extra money to get you the secondary read-only DB, sometimes it needs time and attention in index tuning.
So it entirely depends, but I'd start with "what can I compromise? what can I change?" and go from there.
You can set Execution time-out in seconds.
If you have just one query, I don't know how to set a timeout at the T-SQL level.
However, if you have a few queries (e.g. collecting data into temporary tables) inside a stored procedure, you can control the execution time with GETDATE(), DATEDIFF() and a few INT variables storing the execution time of each part.
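A minimal sketch of that pattern (the 5-second budget is illustrative, and the step query just reads sys.objects so the snippet runs as-is):
DECLARE @start datetime, @budget_ms int;
SET @start = GETDATE();
SET @budget_ms = 5000;  -- illustrative budget

-- step 1: collect data into a temporary table
SELECT name, object_id
INTO #stage
FROM sys.objects;

-- check elapsed time before starting the next step
IF DATEDIFF(millisecond, @start, GETDATE()) > @budget_ms
BEGIN
    RAISERROR('Time budget exceeded, stopping early.', 16, 1);
    RETURN;
END

-- step 2 would go here...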
You can specify the connection timeout within the SQL connection string, when you connect to the database, like so:
"Data Source=localhost;Initial Catalog=database;Connect Timeout=15"
At the server level, use SQL Server Management Studio to view the server properties, and on the Connections page you can specify the default query timeout.
I'm not quite sure that queries keep on running after the client connection has closed. Queries should not take that long either; MSSQL can handle large databases, and I've worked with GBs of data on it before. Run a performance profile on the queries; perhaps some well-placed indexes could speed them up, or rewriting the query could too.
Update:
According to this list, SQL timeouts happen when waiting for attention acknowledgement from server:
Suppose you execute a command, then the command times out. When this happens the SqlClient driver sends a special 8 byte packet to the server called an attention packet. This tells the server to stop executing the current command. When we send the attention packet, we have to wait for the attention acknowledgement from the server and this can in theory take a long time and time out. You can also send this packet by calling SqlCommand.Cancel on an asynchronous SqlCommand object. This one is a special case where we use a 5 second timeout. In most cases you will never hit this one, the server is usually very responsive to attention packets because these are handled very low in the network layer.
So it seems that after the client connection times out, a signal is sent to the server to cancel the running query too.

A T-SQL query executes in 15s on sql 2005, but hangs in SSRS (no changes)?

When I execute a T-SQL query, it executes in 15s on SQL Server 2005.
SSRS was working fine until yesterday. I had to kill it after 30 min.
I made no changes to anything in SSRS.
Any ideas? Where do I start looking?
Start your query in SSRS, then look into the Activity Monitor in Management Studio. See if the query is currently blocked by any chance, and in that case, what it is blocked on.
Alternatively you can use sys.dm_exec_requests and check the same thing, without the user interface getting in the way. Look at the session executing the query from SSRS and check its blocking_session_id, wait_type, wait_time and wait_resource columns. If you find that the query is blocked, SSRS is probably not at fault and something in your environment is blocking the query execution. If, on the other hand, the query is making progress (the wait_resource changes), then it just executes slowly and it's time to check its execution plan.
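For example, something along these lines while the report query is running (a sketch; narrow it down to the report's session once you spot it):
-- Current requests, whether they are blocked, and what they are waiting on
SELECT session_id, status, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE session_id > 50;   -- skip most system sessions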
Have you tried making the query a stored procedure to see if that helps? This way execution plans are cached.
Updated: You could also make the query a view to achieve the same effect.
Also, SQL Profiler can help you determine what is being executed. This will allow you to see whether the SQL is the cause of the issue, or whether it is Reporting Services rendering the report (i.e. not fetching the data).
There are a number of connection-specific things that can vastly change performance - for example the SET options that are active.
In particular, some of these can play havoc if you have a computed+persisted (and possibly indexed) column. If the settings are a match for how the column was created, it can use the stored value; otherwise, it has to recalculate it per row. This is especially expensive if the column is a promoted column from xml.
Does any of that apply?
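One quick way to compare the relevant SET options between your Management Studio session and the report's connection is a query against sys.dm_exec_sessions (a sketch; ARITHABORT commonly differs between SSMS and other clients):
-- SET options per connected session
SELECT session_id, program_name, arithabort, ansi_nulls, ansi_padding, ansi_warnings, quoted_identifier
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;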
Are you sure the problem is your query? There could be SQL Server problems. Don't forget about the ReportServer and ReportServerTempDB databases. Maybe they need some maintenance.
The first port of call for any performance problems like this is to get an execution plan. You can either get this by running an SQL Profiler Trace with the ShowPlan Xml event, or if this isn't possible (you probably shouldn't do this on loaded production servers) you can extract the cached execution plan that's being used from the DMVs.
Getting the plan from a trace is preferable, however, as that plan will include statistics about how long the different nodes took to execute. (The trace won't cripple your server or anything, but it will have some performance impact.)
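If you go the DMV route, a sketch of pulling a cached plan (replace the LIKE pattern with a fragment of your query's text):
-- Find the cached plan for a particular query
SELECT st.text, qp.query_plan, cp.usecounts
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%your query text here%';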

Resources