Third Party Tools for Monitoring SQL Server Performance

I'm in a situation where I came into a new job and I have to support several legacy systems. The original developer is no longer around. These legacy systems are really hammering away at our SQL Server and killing performance. I know that there are a lot of things that can be done in the code, but rewriting code is really my last resort.
What I'm looking for is some sort of tool that will monitor the queries coming into the server and give recommendations on indexing solutions. I know I can use SQL Server Profiler, but I'm looking for something a little more user-friendly, something that can help me make the indexing decisions.
I know I didn't explain it very well, but I'm sure this is a common request. I'd like to make informed decisions on what to index and avoid "shooting from the hip" and indexing everything in sight. Thanks for any recommendations!

You don't need a third party tool for this.
Assuming SQL Server 2005+: as long as you can use SQL Profiler (actually SQL Trace; don't use the Profiler GUI for this, to keep tracing overhead as low as possible) to collect a representative workload, you can use the Database Tuning Advisor to automate analysis of the workload and make indexing recommendations.
You can also use the missing index DMVs for a quick overview of areas to investigate, but the DTA will do a more holistic analysis and take into account possible adverse effects of indexes on data modification statements.
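For that quick first pass, a query along these lines ranks the missing-index suggestions SQL Server has recorded since the last restart (the TOP cutoff and the benefit formula are my own choices, not an official metric):

```sql
-- Hedged sketch: rank missing-index suggestions recorded by the optimizer.
-- The est_benefit formula is a common heuristic, not an official score.
SELECT TOP (25)
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_total_user_cost * migs.avg_user_impact * migs.user_seeks AS est_benefit
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY est_benefit DESC;
```

Treat these as candidates to feed into the DTA or verify by hand, not as indexes to create blindly.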

+1 for Martin's answer, but since you asked about third-party tools, I'll mention one of my favorites (and no, I don't work for the company). Ignite for SQL Server does an excellent job of analyzing server activity in terms of wait-time analysis. It won't make recommendations for you, but it will quickly identify the worst-performing queries where you need to focus your effort.

SQL Server 2005+ has a lot of DMVs (dynamic management views) that you can query to get server info, as well as the Profiler / SQL Trace tool.
We administer several large database servers.
Idera is a good tool to manage multiple database servers easily.
I think you'd be a much better DBA if you learned more about the built-in functionality of SQL Server.
Have a browse of http://msdn.microsoft.com/en-us/library/ms188754.aspx to find out more about DMVs and functions.
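As a taste of what those DMVs give you, here's a sketch (aliases and the TOP cutoff are mine) that lists the most CPU-hungry statements in the plan cache:

```sql
-- Hedged sketch: top cached statements by total CPU time.
SELECT TOP (20)
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```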
Another common issue with performance could be your indexes.
There's a great tutorial that combines the DMVs with improving indexes here:
http://searchsqlserver.techtarget.com/tip/Using-dynamic-management-views-to-improve-SQL-Server-index-effectiveness
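Along the same lines as that tutorial, here's a hedged sketch for spotting indexes that are maintained far more than they are read (usage counters reset at instance restart, so judge them over a representative uptime window):

```sql
-- Hedged sketch: indexes in the current database that are written
-- far more often than they are read. Review before dropping anything.
SELECT
    OBJECT_NAME(ius.object_id) AS table_name,
    i.name AS index_name,
    ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
    ON i.object_id = ius.object_id
   AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID()
  AND ius.user_updates > (ius.user_seeks + ius.user_scans + ius.user_lookups)
ORDER BY ius.user_updates DESC;
```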
Idera is really worth checking out as a good starting point, though. Combined with DMVs and SQL Trace, there shouldn't be much you won't be able to fix.
Idera just takes most of the legwork out of doing things.
http://www.idera.com/Content/Home.aspx
Idera: SQL Diagnostic Manager

Related

How to slow down all SQL Queries?

For testing purposes, is there a way to slow down all SQL queries? There are ways to slow down specific queries but is there a way to slow down ALL queries?
Ideally it would be nice to have a simple way to say: slow all queries by [N] milliseconds.
Sloppier ways might include...
- Lower SQL Server's available memory
- Run other SQL statements in a loop to create an artificial load
- Create/run a separate CPU-, IO-, or memory-heavy process on the SQL Server box
However... these seem very rough and not at all elegant. Is there something more precise?
There's actually a tool that was created for this purpose, aptly named SQLQueryStress. It's available on GitHub.
GitHub link: https://github.com/ErikEJ/SqlQueryStress
I was introduced to it by Brent Ozar and company. See the link below to the article in which he introduces it and shows some of its use.
https://www.brentozar.com/archive/2015/05/how-to-fake-load-tests-with-sqlquerystress/
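SQLQueryStress adds load rather than slowing what's already there. If you specifically want to throttle everything, one approach I'd suggest (my own sketch, not from the article; requires Enterprise Edition, SQL Server 2008+) is Resource Governor, which caps CPU for classified sessions instead of adding a fixed [N]-millisecond delay:

```sql
-- Hedged sketch: throttle every new session via Resource Governor.
-- Pool, group, and function names are invented. Run in master.
-- Note: MAX_CPU_PERCENT only bites when there is CPU contention,
-- so this approximates "slower", not a fixed per-query delay.
CREATE RESOURCE POOL SlowPool WITH (MAX_CPU_PERCENT = 10);
GO
CREATE WORKLOAD GROUP SlowGroup USING SlowPool;
GO
CREATE FUNCTION dbo.fnThrottleClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN N'SlowGroup';  -- route every new session into the throttled group
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnThrottleClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```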

How to get better insights on slow-running SQL queries

In certain periods, we are seeing slow queries on our SQL Server 2012 instance (on an Azure VM).
When analysing these queries (execution plan), there is nothing obviously wrong with the plan, and running them in Management Studio yields instant results.
My guess here is that there are lock contention issues.
My question is: how can I get better insights into why these queries are running slow?
Is there a third-party tool that's great for this job? (e.g. Redgate SQL Monitor)
From experience, the built-in SQL queries/tools that help diagnose these sorts of issues aren't very easy to use (manual queries, deadlock graphs, etc.).
Can anyone point me in the right direction of either a great tool I can use, or a simple way within SQL Server to find out why they are running slow?
Thanks!
The best "free" tool that I've found is Blitz tools provide by Brent Ozar (part of first responder kit) found here.
With these tools, you should be able to spot a problem very quickly, and sp_blitzindex will give all kinds of info on indexes. You can automate/notify as you see fit.
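For what it's worth, typical usage looks roughly like this (procedure names are from the kit; treat the exact parameters as an assumption), plus a plain DMV check for the lock contention suspected in the question:

```sql
-- Hedged sketch: First Responder Kit entry points (parameters assumed).
EXEC sp_Blitz;                                 -- overall health check
EXEC sp_BlitzFirst @Seconds = 30;              -- sample what's hurting right now
EXEC sp_BlitzIndex @DatabaseName = N'YourDb';  -- index diagnosis (name is a placeholder)

-- Plain DMV check: requests that are currently blocked, and by whom.
SELECT
    r.session_id,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time AS wait_ms,
    t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```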

Need design tools or performance measurement tools for SQL Server 2014

As per the requirement, I need to know whether there is any design tool available for SQL Server 2014.
I also want to explore performance tuning and optimization for written code.
If you're looking for free tools, then Windows Performance Monitor is pretty helpful. You can add the specific counters that you want to monitor and keep track of them. There is a simple 'how to' guide here that will show you how you can store them in a SQL Server instance, then manipulate the data however you want.
Performance tuning queries comes with practice. Find some poor queries, look at the execution plan, and see what it's doing. There are many free videos that you can watch that will help you understand what it shows.
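As a minimal starting point, you can also have SQL Server report I/O and timing for a query you suspect (the table name below is a placeholder):

```sql
-- Hedged sketch: measure logical reads and CPU/elapsed time for one query.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT TOP (100) *        -- replace with the query under investigation
FROM dbo.SomeTable;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```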

How to synchronize databases in different servers in SQL Server 2008?

I have 2 databases that have the same structure, one on a local machine and one on the company's server. At a set interval, the data from the local DB should be synchronized to the server DB.
I have a general idea of how to do this - create a script that somehow "merges" the information that is not in the server DB, then run this script as a scheduled job on the server. However, my problem lies in the fact that I am not very experienced with this.
Does SQL Server Management Studio provide an easy way to do this (some kind of wizard) and generate this kind of script? Or is this something I'll have to build from scratch?
I've done some basic Google searches and came across the term 'Replication', but I don't fully understand it. I would rather hear some input from people who have actually done this or who are good at explaining this kind of stuff.
Thanks.
Replication sounds like a good option for this, but there would be some overhead (not technical overhead, but the knowledge needed to support it).
Another SQL Server option is SSIS. SSIS provides graphical tools to design what you're trying to do. The SSIS package can also run SQL statements, if appropriate. An SSIS package can be started, and therefore scheduled, from a SQL Server job.
You should consider the complexity of the synchronization rules when choosing your solution. For example, would it be difficult to resolve conflicts, such as a duplicate key, when merging the data? A SQL script may be easy to create if the rules are simple, but complex conflict rules may be more difficult to implement in a script (or in replication).
SQL Server Management Studio unfortunately doesn't offer much in this way.
You should have a serious look at some of the excellent commercial offerings out there:
Red Gate Software's SQL Compare and SQL Data Compare - excellent tools, highly recommended! You can even compare a live database against a backup from another database and synchronize the data - pretty nifty!
ApexSQL's SQL Diff and SQL Data Diff
They all cost money - but if you're serious about it, and you use them in your daily routine, they're paid for in no time at all - well worth every dime.
The only "free" option you have in SQL Server 2008 would be to create a link between the two servers and then use something like the MERGE statement (new in SQL Server 2008) to transfer the data. That doesn't work for structural changes, and it's limited only to having a live connection between the two servers.
You should definitely read up on transactional replication. It sounds like a good fit for the situation you've described. Here are a few links to get you started.
- How Transactional Replication Works
- How do I... Configure transactional replication between two SQL Server 2005 systems?
- Performance Tuning SQL Server Transactional Replication
What you want is Peer-to-Peer Transactional Replication, which allows data to be updated at both databases yet keeps them in sync through a continuous merge of changes. This is the closest match to what you want, but it is a fairly costly option (it requires Enterprise Edition at both sites). Another option is Bidirectional Transactional Replication, but since this also requires two EE licenses, I'd say that peer-to-peer is easier to deploy for the same money.
A more budget-friendly option is Updatable Subscriptions for Transactional Replication, but updatable subscriptions are being deprecated, and you'd be betting your money on a losing horse.
Another option is to use Merge Replication. And finally, for the cases when the 'local' database is quite mobile, there is Sync Framework.
Note that all these options require some configuration and cooperation from the Company's server DB.
There are some excellent third-party tools out there. For me, xSQL Data Compare has always done the trick. And because the comparisons are highly configurable, it is suitable for almost every data-compare or data-synchronization scenario. Hope this helps!

Can SQL Server 2008 handle 300 transactions a second?

In my current project, the DB is SQL Server 2005 and the load is around 35 transactions/second. The client is expecting more business and is planning for 300 transactions/second. Currently, even with good infrastructure, the DB is having performance issues. A typical transaction will have at least one update/insert and a couple of selects.
Have you guys worked on any systems that handled more than 300 txn/s on SQL 2005 or 2008? If so, what kind of infrastructure did you use, and how complex were the transactions? Please share your experience. Someone has already suggested using Teradata and I want to know if this is really needed or not. Not my job exactly, but I'm curious about how much SQL Server can handle.
It's impossible to tell without performance testing - it depends too much on your environment (the data in your tables, your hardware, the queries being run).
According to tpc.org, it's possible for SQL Server 2005 to get 1,379 transactions per second. Here is a link to a system that's done it. (There are SQL Server-based systems on that site that have far more transactions... the one I linked was just the first one I looked at.)
Of course, as Kragen mentioned, whether you can achieve these results is impossible for anyone here to say.
Infrastructure needs for high-performance SQL Servers may be very different from your current structure.
But if you are currently having issues, it is very possible that the main part of your problem is bad database design and bad query design. There are many ways to write poorly performing queries. In a high-transaction system, you can't afford any of them. No select *, no cursors, no correlated subqueries, no badly performing functions, no where clauses that aren't sargable, and on and on.
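To make the sargability point concrete, here's a hypothetical before/after (table and column names invented):

```sql
-- Non-sargable: the function wrapped around the column blocks an index seek.
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2010;

-- Sargable rewrite: a range predicate on the bare column can use an index.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20100101' AND OrderDate < '20110101';
```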
The very first thing I'd suggest is to get yourself several books on SQL Server performance tuning and read them. Then you will know where your system problems are likely to be and how to actually determine that.
An interesting article:
http://sqlblog.com/blogs/paul_nielsen/archive/2007/12/12/10-lessons-from-35k-tps.aspx
