SQL Server Query Execution Flow

I am looking for information about how SQL Server handles query execution in finer detail (for example, what data is kept in the buffer pool, and how it decides whether to fetch fresh data when only one column of a table involved in a query has changed).
If anyone knows good sources, please let me know.
We have a web application using SQL Server 2000 with heavy, frequent reads: almost 70% of the tables (a dashboard) are read every 30 seconds, while a lot of writes are happening at the same time.
Please let me know any tips for optimizing the above scenario.

No one is going to be able to give you an answer on how to solve your optimization problem. This requires access to your database server. To solve your problems you need to understand roughly how the database works, and for this you need to do some reading.
Online resources are OK, but the following three books are priceless. The first two have very detailed information about how SQL Server works; the last is a guide to writing queries, with a discussion of how the engine views queries as well.
Kalen Delaney: SQL Server 2008 Internals
Ken Henderson et al: SQL Server 2005 Practical Troubleshooting
Itzik Ben-Gan et al: Inside Microsoft SQL Server 2008: T-SQL Querying

It would take a lot of posts to answer these internals-type questions; I suggest you start reading some of the white papers / blogs and some books.
For 2000, Ken Henderson's SQL Server Architecture book will give you deep internal details; for the more recent editions of SQL Server he did a 2005 Practical Troubleshooting book, which isn't bad, and Kalen Delaney's SQL 2008 Internals book is very good.

You should examine the execution plan.
Press Ctrl+L in SSMS, or issue SET SHOWPLAN_TEXT ON before executing the query.
This will give you detailed information about which indexes are used, which join algorithms are applied, etc.
You may also want to see the statistics:
SET STATISTICS TIME ON
SET STATISTICS IO ON
These will tell you how many reads were done from actual tables vs. cache, and how much time (elapsed and CPU) each query took.
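For example, a minimal session sketch (the table and query are hypothetical stand-ins for one of your dashboard queries):

-- Report I/O and timing for every statement in this session
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Hypothetical dashboard query; substitute one of your own
SELECT OrderID, Status
FROM dbo.Orders
WHERE Status = 'Pending';

-- The Messages tab now shows logical/physical/read-ahead reads
-- per table, plus CPU and elapsed time for the statement
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;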

About SQL Server 2008 buffer management: http://msdn.microsoft.com/en-ca/library/aa337525.aspx
From there you can use the left sidebar to browse information on other subjects.

As far as external sources go, MSDN has a comprehensive resource on optimising your SQL Server 2000 installation, particularly the Patterns & Practices paper on performance and scalability.
If you know where to start looking, take a bottom-up approach, examining Profiler traces and query plans as has been mentioned. Otherwise, start from the top down and learn about the costs involved in recompiling queries, how to index effectively, and how the query optimiser uses statistics.

Related

Third Party Tools for Monitoring SQL Server Performance

I'm in a situation where I came into a new job and I have to support several legacy systems. The original developer is no longer around. These legacy systems are really hammering away at our SQL Server and killing performance. I know that there are a lot of things that can be done in the code, but rewriting code is really my last resort.
What I'm looking for is some sort of tool that will monitor the queries coming into the server and give recommendations on indexing solutions. I know I can use SQL Server Profiler, but I'm looking for something a little more user-friendly that can help me make indexing decisions.
I know I didn't explain it very well, but I'm sure this is a common request. I'd like to make informed decisions on what to index and avoid "shooting from the hip" and indexing everything in sight. Thanks for any recommendations!
You don't need a third party tool for this.
Assuming SQL Server 2005+, as long as you can use SQL Profiler (actually SQL Trace; don't use the Profiler GUI for this, to keep tracing overhead as low as possible) to collect a representative workload, you can use the Database Tuning Advisor to automate analysis of the workload and make indexing recommendations.
You can also use the Missing Index DMVs for a quick overview of areas to investigate but the DTA will do more holistic analysis and take into account possible adverse effects of indexes on data modification statements.
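As a rough sketch of querying the missing-index DMVs (the "impact" formula here is just one common heuristic, not an official metric):

-- Rank missing-index suggestions by a rough impact estimate (SQL 2005+)
-- Treat these as hints to investigate, not indexes to create blindly
SELECT TOP 20
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.user_seeks * migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) AS estimated_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs ON migs.group_handle = mig.index_group_handle
ORDER BY estimated_impact DESC;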
+1 for Martin's answer, but since you asked about 3rd party tools, I'll mention one of my favorites (and no, I don't work for the company). Ignite for SQL Server does an excellent job of analyzing server activity in terms of wait time analysis. It won't make recommendations for you, but it will quickly identify the worst performing queries where you need to focus your effort.
SQL Server 2005+ has a lot of DMVs (Dynamic Management Views) that you can query to get server info, as well as the Profiler / SQL Trace tool.
We administer several large database servers.
Idera is a good tool to manage multiple database servers easily.
I think you'd make a much better DBA if you learned more about the built-in functionality of SQL Server.
Have a browse of
http://msdn.microsoft.com/en-us/library/ms188754.aspx
to find out more about DMV's and functions.
Another common issue with performance could be your indexes.
There's a great tutorial that combines the DMVs with improving indexes here:
http://searchsqlserver.techtarget.com/tip/Using-dynamic-management-views-to-improve-SQL-Server-index-effectiveness
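In the same spirit as that tutorial, a hedged sketch of one common index-usage check (SQL 2005+; how you interpret the numbers is up to you):

-- Find indexes that are written far more than they are read;
-- these are candidates for review, since every write pays their cost
SELECT
    OBJECT_NAME(ius.object_id) AS table_name,
    i.name AS index_name,
    ius.user_seeks + ius.user_scans + ius.user_lookups AS reads,
    ius.user_updates AS writes
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
    ON i.object_id = ius.object_id
   AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID()
ORDER BY writes DESC;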
Idera is really worth checking out though as a good starting point. Combined with DMV's & SQL trace there shouldn't be much you won't be able to fix.
Idera just takes most of the leg work out of doing things.
http://www.idera.com/Content/Home.aspx
Idera: SQL Diagnostic Manager

Can SQL Server 2008 handle 300 transactions a second?

In my current project, the DB is SQL 2005 and the load is around 35 transactions/second. The client is expecting more business and is planning for 300 transactions/second. Currently, even with good infrastructure, the DB is having performance issues. A typical transaction has at least one update/insert and a couple of selects.
Have you worked on any systems that handled more than 300 txn/s running on SQL 2005 or 2008? If so, what kind of infrastructure did you use, and how complex were the transactions? Please share your experience. Someone has already suggested using Teradata and I want to know if this is really needed or not. Not exactly my job, but I'm curious about how much SQL Server can handle.
It's impossible to tell without performance testing; it depends too much on your environment (the data in your tables, your hardware, the queries being run).
According to tpc.org it's possible for SQL Server 2005 to get 1,379 transactions per second. Here is a link to a system that's done it. (There are SQL Server based systems on that site with far more transactions... the one I linked was just the first one I looked at.)
Of course, as Kragen mentioned, whether you can achieve these results is impossible for anyone here to say.
Infrastructure needs for high-performance SQL Servers may be very different from your current structure.
But if you are currently having issues, it is very possible that the main part of your problem is bad database design and bad query design. There are many ways to write poorly performing queries, and in a high-transaction system you can't afford any of them: no SELECT *, no cursors, no correlated subqueries, no badly performing functions, no WHERE clauses that aren't sargable, and on and on.
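To illustrate the sargability point, a minimal sketch (table and column names are hypothetical):

-- Non-sargable: the function wrapped around the column
-- prevents the optimizer from seeking on an OrderDate index
SELECT OrderID
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2010;

-- Sargable rewrite: an open-ended range on the bare column
-- allows an index seek and returns the same rows
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20100101'
  AND OrderDate < '20110101';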
The very first thing I'd suggest is to get yourself several books on SQL Server performance tuning and read them. Then you will know where your system problems are likely to be, and how to actually determine that.
An interesting article:
http://sqlblog.com/blogs/paul_nielsen/archive/2007/12/12/10-lessons-from-35k-tps.aspx

Can SQL Server 2000 be made to populate a fulltext catalogue without blocking the tables it is reading?

I have a database server on SQL Server 2000 (yes I know...) with fulltext catalogues on some of its tables. I'm currently doing a full population overnight in quiet time, and I'd like to be able to update the catalogues during the day so that new data can be considered in searches.
The problem I've noticed is that when an incremental population runs there is a considerable amount of blocking, caused by the population process. The other transactions on this database use "read uncommitted" (dirty reads) to minimize delays (I don't especially care about up-to-the-second accurate data), so I'm not exactly sure why the population, which itself is only reading data, blocks them.
Any clues, hints?
Short story: no, and the situation wasn't much better until recent updates of SQL Server 2008. The RTM version of 2008 had these same issues, as we documented here:
http://www.brentozar.com/archive/2008/11/stackoverflows-sql-2008-fts-issue-solved/
The workaround is to use the fastest storage subsystem that makes sense for your budget and your workload. The full-text catalogs need to be on separate arrays from your data and logs; that way they can finish population faster.
You also mentioned that you're surprised that reading causes locks. We've got articles on SQLServerPedia explaining SQL Server's locking process, like this one:
http://sqlserverpedia.com/wiki/SQL_Server_Locking_Mechanism
If you want more specific answers, watch your server during the population. Run sp_who2, look at which queries are being blocked, and run DBCC INPUTBUFFER(spid) to find out what their T-SQL is. That way you can see exactly what types of queries are causing it. If you're sure they're using read uncommitted, upload a copy of your query execution plan and we can help interpret it to find out what's going on.
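A quick hedged sketch of that workflow (spid 53 is just an example value read off the sp_who2 output):

-- List sessions; the BlkBy column shows who is blocking whom
EXEC sp_who2;

-- For a blocked or blocking spid from the output above,
-- show the last T-SQL batch that session submitted
DBCC INPUTBUFFER (53);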

SQL commands to get performance statistics

Are there SQL commands that I could use to extract performance monitoring data from MS SQL 2005, such as:
transactions per second
page reads/writes
connections (@@CONNECTIONS gives the total, but what about current)
physical reads
locks and blocks
other counters that might be interesting?
You want to look at Dynamic Management Views (DMVs), introduced with SQL 2005.
This is a really great document from MS that gives you an overview as to how to use DMVs troubleshoot performance issues:
http://download.microsoft.com/download/1/3/4/134644fd-05ad-4ee8-8b5a-0aed1c18a31e/TShootPerfProbs.doc
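As a small, hedged example of the kind of DMV queries that document builds on (values for "per second" counters are cumulative, so sample twice and take the difference to get a rate):

-- Raw performance counters exposed through a DMV (SQL 2005+)
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Transactions/sec', 'Page reads/sec', 'Page writes/sec', 'User Connections');

-- Currently blocked requests and who is blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;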
The best way of seeing what's going on under the hood in SQL Server is to use the Performance Monitor built into Windows: click Admin Tools -> Performance. If you haven't used it before, the trick is to start it, then click the + icon at the centre top of the window; a dialog opens with hundreds of different measures that you can then chart, watch, or log.
SQL Server has loads of counters you can check out; what all the data means is, of course, a different question. This solution doesn't integrate with T-SQL or Management Studio, but it is the best way of finding out what's going on.
A great place to learn how to performance tune SQL Server is Brent Ozar's website.
It includes details of how to use Performance Monitor and DMVs, and how to data-mine and interpret the results.
http://www.brentozar.com/sql-server-performance-tuning/

Find out SQL Server hardware or speed test

I use a SQL Server regularly and have recently been getting frustrated by its performance. It would be difficult for me to get direct access to find out the hardware, so:
Is there a direct way in Management Studio to assess performance or find out the exact hardware?
Alternatively, does someone have a set of test SQL procedures I could try, and ideally compare to other results, to get an idea of its performance?
So far I have set up a few quick queries on my local machine's SQL Express instance as a test, and these seem to run quicker than the SQL Server on the network, which is meant to be high performance, although no one knows when it was last upgraded (I have a feeling it hasn't been for six or seven years). Obviously these tests don't account for the possibility of others querying at the same time, or for network transfer of results... Hopefully someone has a better solution.
You can't just ask your server guys? Seems like there's a fair bit of mistrust if you can't get hardware metrics. Count of CPUs, total memory, etc.
If there's that amount of mistrust, even if you found the answer from the database server, rectifying it would be impossible. If you can't get the current parameters, how could you get a change of hardware past the server guys?
Start building rapport. The best line in the world to get someone on your side is, "I'm in trouble and I need your help..." You've elevated them and subjugated yourself, you've put them in a position to save you. You'd be amazed at how much you can get out of people that way.
As far as standard queries go, you could look at the TPC queries.
If you are on 2005+:
SELECT * FROM sys.dm_os_performance_counters
That will give you some SQL-only stats. You will not find much info about the machine without at least terminal access. In the SQL Server startup log you can see some info on processors as well.
You might also try updating the statistics on your server. I had an issue a while back where one query returned in 100 ms and an identical query took 5+ minutes, and the only difference between the two was a capital letter in the table name in my query (which obviously shouldn't matter).
After some searching and SO-questioning, I found that I needed to update my statistics. Could it be that something like this is needed for your database / SQL Server too?
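A hedged sketch of that kind of fix (the table name is hypothetical):

-- Refresh statistics for a single table...
UPDATE STATISTICS dbo.Orders;

-- ...or for every table in the current database
EXEC sp_updatestats;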
This sort of thing can be very political, especially in a firm with an endemic CYA culture (which describes most financial services companies). If there's no reasonable expectation of a good working relationship with the production staff, a few approaches are:
Look at the query plans of the queries and check that they are sensible (using indexes when they should, etc.).
Make it formal. Ask their manager for the specifications of the machine, the disk layout and server configuration, and the last time statistics were updated on all tables and indexes (see the sketch after this list). Make it clear that the machine appears to be under-performing.
If the statistics are out of date, get them updated.
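A hedged way to check statistics age on SQL 2005+ (on 2000 you'd query sysindexes instead; this form uses the newer catalog views):

-- When statistics were last updated for each index in the database
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       STATS_DATE(i.object_id, i.index_id) AS stats_last_updated
FROM sys.indexes AS i
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY stats_last_updated;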
and one more
SELECT * FROM sys.dm_os_sys_info
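A slightly more targeted sketch against that DMV (these column names are from SQL 2005/2008; physical_memory_in_bytes was renamed in later versions):

-- CPU and memory visible to SQL Server
SELECT cpu_count,
       hyperthread_ratio,
       physical_memory_in_bytes / (1024 * 1024) AS physical_memory_mb
FROM sys.dm_os_sys_info;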
