Performance problems with SQL Server Management Studio

I'm running SQL Server Management Studio 2008 on a decent machine. Even if it is the only thing open, with no other connections to the database, anything involving the Database Diagram tool or simple schema changes in a designer takes up to 10 minutes to complete, and SQL Server Management Studio is unresponsive during that time. The same SQL code takes less than a second. This entirely defeats the purpose of the designers and diagrammers.
------------------
System Information
------------------
Operating System: Windows Vista™ Ultimate (6.0, Build 6001) Service Pack 1 (6001.vistasp1_gdr.080917-1612)
Processor: Intel(R) Core(TM)2 Quad CPU Q6700 @ 2.66GHz (4 CPUs), ~2.7GHz
Memory: 6142MB RAM
Please tell me this isn't a WOW64 problem; if it is, I love MS, but step up your 64-bit support in development tools.
Is there anything I can do to get the performance anywhere near acceptable?
Edit:
I've got version 10.0.1600.22 of SQL Server Management Studio installed. Is this not the latest release? I'm sure I installed it from an MSDN CD and I pretty much rely on Windows Update these days. Is there any place I can quickly see what the latest release version number is for tools like this?
Edit:
Every time I go to open a database diagram I get the message "This database does not have one or more of the support objects required to use database diagramming. Do you wish to create them?" I say yes every time. Is this part of the problem? Also, if I press the copy icon, I get the message "Current thread must be set to single thread apartment (STA) mode before OLE calls can be made." Database corruption?

I'm running in a similar environment and not having that problem.
As with any performance problem, you'll have to analyze it a bit - just saying "it takes 10 minutes" gives no information about why it takes so long, and therefore nothing you can use to solve the problem.
Here are some tools to investigate. Admittedly, "play around" is about all I've done with them myself, but I'd recommend learning a little about each of them. http://technet.microsoft.com is a good source on performance issues.
Start with Task Manager, believe it or not. It's been enhanced in Vista and Server 2008, and now has a better Performance tab, and a Services tab. Be sure to click "Show processes from all users", or you'll miss nasty things done by services.
The bottom of the Performance tab has a "Resource Monitor" button. Click it, watch it, learn what it can do for you.
The Resource Monitor is actually part of a larger "Reliability and Performance Monitor" tool in Administrative Tools. Try it. It even includes the new version of perfmon, which will be more useful when you have a better idea what counters to look at.
I will also suggest the Process Explorer and Process Monitor tools from Sysinternals. See http://technet.microsoft.com/en-us/sysinternals/default.aspx.

Do your simple schema changes possibly mean that you're reordering the columns of a table?
In that case, what SQL Management Studio does behind the scenes is create a new table, move all the data from the old table to the newly created table, and then drop the old table.
Thus, if you reorder columns on a table with lots of data, lots of indices or both, you CAN incur a massive amount of "reorganization" work without really realizing it.
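To make that concrete, the script SSMS generates typically has roughly this shape - a simplified sketch with hypothetical table and column names, not SSMS's literal output. Indexes and constraints get recreated on the new table too, which is where the hidden "reorganization" cost comes from:

    BEGIN TRANSACTION;

    -- New table with the desired column order:
    CREATE TABLE dbo.Tmp_Customers (
        CustomerID int           NOT NULL,
        Email      nvarchar(200) NULL,
        Name       nvarchar(100) NULL
    );

    -- Copy every row across - the expensive step on large tables:
    IF EXISTS (SELECT * FROM dbo.Customers)
        INSERT INTO dbo.Tmp_Customers (CustomerID, Email, Name)
        SELECT CustomerID, Email, Name FROM dbo.Customers;

    -- Swap the old table for the new one:
    DROP TABLE dbo.Customers;
    EXECUTE sp_rename N'dbo.Tmp_Customers', N'Customers', 'OBJECT';

    COMMIT;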
Marc

Can you try connecting your SQL Management Studio to a different instance of SQL Server or, better, an instance on a remote machine (and try to make similar changes)?
Are there any entries in the System or Application Event Logs (or SQL logs for that matter)? Have you tried uninstalling and reinstalling SQL Server on your machine? What version of SQL Server (database) are you running?
Lastly, can you open the Activity Monitor successfully? Right-click on the server (machine name) - top of the tree in the Object Explorer window - and click on 'Activity Monitor'.
Do you have problems with other software on your machine or only with SQL Server & Management Studio?

When you open SSMS, it attempts to validate itself with Microsoft. You can speed this process up by performing the second of the recommendations at the following link.
http://www.sql-server-performance.com/faq/sql_server_management_studio_load_time_p1.aspx
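I can't vouch for which item is "second" at that link, but a commonly cited fix for this kind of startup validation delay is to disable the .NET publisher-evidence check in SSMS's config file (the file name and location are assumptions for SSMS 2008; older versions used sqlwb.exe.config):

    <!-- Added inside Ssms.exe.config (assumed name for SSMS 2008) -->
    <configuration>
      <runtime>
        <generatePublisherEvidence enabled="false"/>
      </runtime>
    </configuration>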
Also, are you using the Registered Servers feature? If so, SSMS will attempt to validate all of those servers as well.

It seems as though it was a network configuration problem. Never trust a developer (myself) to set up a haphazard domain at his office.
My computer's DNS setting pointed to my ISP's DNS server (the default, because the ISP-provided wireless router we're using doesn't let me override the DNS server with my own) instead of the DNS server here, so I have to remember to configure it manually on each computer - which I had forgotten to do on this particular one.
I only discovered it when I tried to connect to a remote SQL Server instance from this PC for the first time. It was trying to resolve the name to an actual sub-domain of mycompany.com instead of using my DNS server's authority for COMPUTERNAME.corp.mycompany.com.
I can't say why this was an issue for the designers in SQL Server but not for anything else. My only hypothesis is that when I established a connection to my own computer locally using the computer name (instead of "." or "localhost"), SQL queries executed immediately because the connection was known to be local, while the designers still waited for a timeout from the external IP address before trying the local one.
Whatever the explanation, changing the DNS server on the local machine's network card to my own DNS server's IP made it all work very quickly.
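In case it helps anyone with the same symptom: a stopgap, if you can't fix the DNS server itself, is a HOSTS file entry on the client that maps the server's internal name straight to its LAN address (the address below is a made-up example):

    # %SystemRoot%\System32\drivers\etc\hosts
    192.168.1.10    COMPUTERNAME.corp.mycompany.com    COMPUTERNAME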

I had a similar issue with mine. Turned out to be some interference with the biometrics login service running on my laptop. Disabled the service and now it works.

Related

VB.NET and SQL Server Express application deployment

I have developed an application using VB.NET with SQL Server Express as the database back end.
The application has 5 user profiles (each user profile provides different services).
Deployment requirements:
The application is to be deployed on a LAN with 10-20 machines.
Any user profile can be accessed from any machine.
Any changes to the database entries should be reflected on all machines.
I am confused about how I should achieve this deployment. According to my research:
1. The database should be deployed on one machine. This machine will act as the database server.
My problem(s):
I am familiar with accessing databases on the local machine, but how do I access a remote database?
Is the connection string the only thing that needs to be addressed or are there any other issues too?
Do I need to install SQL Server on all machines or only on the server machine?
Do I have to deal with concurrency issues (multiple users accessing/modifying same data simultaneously) or is it handled by the database engine?
2. The application can be deployed in 2 ways:
i. Storing the executable on a shared network drive on the server and providing a shortcut on the desktop of each machine.
ii. Storing the executable itself on each machine.
My Problem(s):
How does approach 1 work? (One instance of an executable running on multiple machines? :s)
In approach 2 , will the changes in database entries be reflected on all machines appropriately?
In approach 2, if there are changes to the application, is there any method to update it on all machines (other than redeploying it on each machine)?
Which approach is preferable?
Do I need to install the .NET Framework on all machines?
Will I have to make any other system changes (firewall, security, permissions)?
If given a choice to install the operating system on each machine, which version of Windows is preferable for such an application environment?
This is my first time deploying a multi-user database application on a network. I'll be very grateful for any suggestions, advice, references, etc.
Question 1: You will need to create SQL Server 'roles' for each of your 'profiles'. A given user will be assigned one or more of those 'roles'. Each of your tables, views, stored procedures, and triggers will need to be assigned one or more roles. This is a messy business - this is why DBAs get paid lots of money to lounge around most of the time (I'm kidding, don't vote me down).
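As a rough T-SQL sketch of that setup (role, object, and user names here are hypothetical):

    -- One role per profile; grant it only what that profile needs:
    CREATE ROLE SalesProfile;
    GRANT SELECT, INSERT, UPDATE ON dbo.Orders TO SalesProfile;
    GRANT EXECUTE ON dbo.usp_CreateOrder TO SalesProfile;

    -- Put a database user into the role (SQL Server 2005-era syntax):
    EXEC sp_addrolemember @rolename = 'SalesProfile', @membername = 'AliceUser';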
Question 2: If you 'remote in' to a server, you'll get the server screens, which are quite a bit duller than the workstation presentation. Read up on ClickOnce; it gives you the ability to detect an updated application on a host and automatically deploy the update to the user's machine. That gets rid of the rather messy business of running around to 20 machines installing upgrades every time you fix something.
As you have hands-on access to all the machines your task is comparatively simpler.
Install SQL Express on your chosen db server. You should disable the 'hide advanced options' setting in the installer; this will allow you to enable TCP/IP and the SQL Browser service. You may also want mixed-mode authentication - that depends on your app and on whether the network is a domain or peer-to-peer. The connection string will need to be modified, as you are aware. Also, the default configuration of Windows Firewall on the server will block access to the db engine - you will need to open exceptions in the firewall for the Browser service and for SQL Server itself. Open these as exceptions for the exes, not as port numbers etc. Alternatively, if you have a firewall between your server and the outside world, you may decide to just turn off the firewall on the server, at least temporarily while you get it working.
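For reference, after those changes the client connection string might look like one of these (server, instance, database, and credential names are placeholders; SQLEXPRESS is the default instance name for SQL Server Express):

    Windows authentication:
        Data Source=DBSERVER\SQLEXPRESS;Initial Catalog=MyAppDb;Integrated Security=True
    SQL (mixed-mode) authentication:
        Data Source=DBSERVER\SQLEXPRESS;Initial Catalog=MyAppDb;User ID=AppUser;Password=AppPassword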
No, you don't need to install any SQL Server components on the workstations.
Concurrency issues should be handled by your application. I don't want to be rude, but if you are not aware of this, maybe you are not yet ready to deploy your app to production. Exactly what needs to be done about concurrency depends on both the requirements of your application and the data access technology you are using. If your application will be used mostly to enter new records and then just read them later, you may get away without much concurrency-handling code; it's the scenario where users simultaneously edit existing records that causes problems - but you need at least basic handling in place.
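To make "basic handling" concrete, one common approach is optimistic concurrency with a rowversion column - a minimal T-SQL sketch, with hypothetical table, column, and procedure names:

    -- Add a version stamp to the table:
    ALTER TABLE dbo.Customers ADD RowVer rowversion;
    GO
    CREATE PROCEDURE dbo.usp_UpdateCustomerPhone
        @CustomerID     int,
        @NewPhone       nvarchar(30),
        @RowVerWhenRead binary(8)   -- value read together with the row
    AS
    BEGIN
        -- Succeed only if nobody changed the row since it was read:
        UPDATE dbo.Customers
        SET    Phone = @NewPhone
        WHERE  CustomerID = @CustomerID
          AND  RowVer = @RowVerWhenRead;

        IF @@ROWCOUNT = 0
            RAISERROR('Row changed since it was read; reload and retry.', 16, 1);
    END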
Re where to locate the client exe - either of your suggestions can work. Simplest is local installation on each machine using an .msi file; you can keep a master copy of the .msi on the server. You can do stuff with login scripts, group policies, etc., or indeed ClickOnce. To keep it simple at this stage, I would just install from an .msi onto each machine - it sounds like you have enough complexity to get your head around already.
One copy of the exe on the server can be handled in a more sophisticated manner by Terminal Server, Citrix, etc.
Either way, assuming your app works correctly, yes - all changes will be made against the same db and be visible to all workstations.
Yes, you will need the .NET Framework on all machines - however, it may very well already be there. Different versions of Windows came with different versions of the Framework built in and/or updated via Windows Update; it also depends, of course, which version you built your exe against.
Right, I hope there is something helpful in that lot. Good luck.

DBA's say no to SQL Server DTC?

I am trying to get our DBAs to enable DTC on a cluster of SQL Server 2005. Unfortunately they keep refusing. Their argument is that they would need to set up a dedicated host for DTC (could take months!), as it is not a matter of ticking a few boxes. Is this true? How intrusive is DTC in a shared environment such as a SQL farm? Do I have an argument against this?
Thanks
I had to tone down the original response your 'DBA' team deserves!
In response to your questions:
Dedicated server - Not at all. Everywhere I've worked with clusters, the DTC service is installed when the cluster is commissioned. Typically it sits in its own resource group or within the cluster group. If in its own group, it usually sits on whichever server is hosting the cluster group.
Intrusive? - Absolutely not. It should be installed when the cluster is created, as per MS best practice.
Do you have an argument? - You most certainly do. The links below should cover the why and how for getting it installed:
MSDTC and SQL on a Cluster
Clustered SQL Server do's, dont's and basic warnings
DTC needs to be enabled and running on both sides of the connection. In my organization, it took some research to figure out which four boxes to check and then some hand-holding to get those boxes checked on all db servers, all app servers and most laptops. There's still a couple of hold-out developer laptops... but they're ok as long as they don't write. :)
You should have some driving scenario (such as an atomic multiple-database write) to hit the DBAs over the head with. Give them some time to guess at alternatives... then let them know that DTC is the only hammer for this kind of nail.
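One concrete driving scenario is an atomic write spanning two servers via a linked server. A T-SQL sketch (server, database, and table names are hypothetical); without MSDTC running on both machines, the remote update fails to enlist and the batch errors out:

    BEGIN DISTRIBUTED TRANSACTION;

    -- Local write:
    UPDATE dbo.Accounts
    SET Balance = Balance - 100
    WHERE AccountID = 1;

    -- Write on a linked server; this is what enlists MSDTC:
    UPDATE REMOTESRV.Finance.dbo.Accounts
    SET Balance = Balance + 100
    WHERE AccountID = 2;

    COMMIT TRANSACTION;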
I'm unsure of the implications of DTC on a SQL farm. I imagine the whole farm could get involved in the transaction if it involves enough data... which can't be a good thing.

How to speed up mssql_connect()

I'm working on a project where a PHP dialog system is communicating with a Microsoft SQL Server 2008 and I need more speed on the PHP side.
After profiling my PHP scripts, I discovered that a call to mssql_connect() needs about 200 milliseconds on that particular system. For some simple dialogs this is about 60% of the whole script runtime. So I could gain a huge performance boost by speeding up this call.
I already assured that only one single connection handle is produced for every request to my PHP scripts.
Is there a way to speed up the initial connection with SQL Server? Some restrictions apply, though:
I can't use PDO (there's a lot of legacy code here that won't work with it)
I don't have access to the SQL Server configuration, so I need a PHP-side solution
I can't upgrade to PHP 5.3.X, again because of crappy legacy code.
Hm. I don't know much about MS SQL, but optimizing that single call may be tough.
One thing that comes to mind is trying mssql_pconnect(), of course:
First, when connecting, the function would first try to find a (persistent) link that's already open with the same host, username and password. If one is found, an identifier for it will be returned instead of opening a new connection.
But you probably already have thought of that.
The second thing, you are not saying whether MS SQL is running on the same machine as the PHP part, but if it isn't, maybe there's a basic networking issue at hand? How fast is a classic ping between one host and the other? The same would go for a virtual machine that is not perfectly configured. 200 milliseconds really sounds very, very slow.
Then, in the User Contributed Notes to mssql_connect(), there is talk of a native PHP driver for MS SQL. I don't know anything about it - whether it retains the "old" syntax and whether it is usable in your situation - but it might be worth a look.
The User Contributed Notes are always worth a look, there are tidbits like this one:
Just in case it helps people here... We were being run ragged by extremely slow connections from IIS6 --> SQL Server 2000. Switching from CGI to ISAPI fixed it somewhat, but the initial connection still took along the lines of 10 seconds, and eventually the connections wouldn't work any more.
The solution was to add the database server IP address to the HOST file on the server, pointing it to the internal machine name. Looks like some kind of DNS lookup was the culprit.
Now connections and queries are flying, and the world is once again right.
We have been going through quite a bit of optimization between php 5.3, FreeTDS and mssql lately. Assuming that you have adequate server resources, we are finding that two changes made the database interaction much faster and much more reliable.
Using mssql_pconnect() instead of mssql_connect() eliminated an intermittent "cannot connect to server" issue. I read a lot of posts that indicated negative issues associated with persistent connections, but so far we haven't seen anything to suggest that it's a problem. The PHP server seems to keep between 20 and 60 persistent connections open to the db server, depending upon load.
Using the IP address of the database server in the freetds.conf file instead of the hostname also lent a speed increase (a sample entry is sketched below).
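The relevant freetds.conf entry looks something like this (the section name, IP address, and TDS version are placeholders for your own setup):

    # Using an IP address instead of a hostname skips the DNS lookup
    [mssql2008]
        host = 192.168.1.50
        port = 1433
        tds version = 8.0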
The only thing I could think of is to use an IP address instead of a hostname for the SQL connection, to spare the DNS lookup. Maybe persistent connections are an option, and a little bit faster.

Will uninstalling a named instance of SQL Server require a reboot or cause issues with existing instances?

I have a server with a default instance and 2 named instances of SQL Server 2005 standard installed. This is a mission critical production server that cannot be restarted during normal business hours.
Will uninstalling the two named instances of SQL Server 2005 require a reboot or put the server in a state that may cause issues with the default instance of SQL Server 2005 until it's rebooted?
This would probably get a better answer at serverfault.com.
I'm not sure how much it would help performance - if a SQL instance isn't getting hit, it doesn't do much. You could probably get away with uninstalling, but then again, bad things can happen in surgery. I've never killed SQL Server by uninstalling an instance, but I have killed the client tools. I would take one of the following approaches:
a) First, back up and drop all the databases to reclaim the disk space (a T-SQL sketch follows below). Then stop and disable the services for the named instances. The binaries will still be there, but they aren't too large and will just sit idle.
b) A better long-term plan, if you can source the hardware, is to set up a brand-new box and drop a clean SQL instance on it, then port the live server over. Really not too painful. Then repurpose the old box as you see fit.
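For option (a), the per-database T-SQL looks roughly like this (database name and backup path are placeholders):

    -- Back up, verify, then drop each database on the instances being retired:
    BACKUP DATABASE MyAppDb
        TO DISK = N'D:\Backups\MyAppDb.bak'
        WITH INIT, CHECKSUM;
    RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyAppDb.bak';
    DROP DATABASE MyAppDb;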
Is there a critical reason why you need to uninstall the named instances? Can you just ignore them?
EDIT: The answer is yes - you can uninstall via Add/Remove Programs, and no reboot occurs.
An article which might apply to your situation:
http://support.microsoft.com/?kbid=915854

MSSqlSM studio reacts really slowly

My Microsoft SQL Server Management Studio is slow. And by slow I mean more-than-a-minute-to-render-a-context-menu slow.
All other things work perfectly fine. The connection to the database itself is not slow (my app works just fine, and context menus shouldn't need a connection to the DB anyway, I guess).
Does anybody have any idea what I should check to solve this?
--EDIT--
CPU is around 3%
Gigs of free RAM
Only right-clicking a table in the Object Explorer, nothing else
The database is remote
It's the full version of SSMS
No system logging errors
Reinstall had no effect
UPDATE
I installed Toad for SQL and everything works super smoothly there. Actually, I find it way more productive than SSMS ever was for me. It's not really an answer to my question, but it is certainly a solution.
Have you installed any add-ins/plugins? Those might be the source of the problem.
Does this behavior occur for all SQL Servers you connect to, or just a few in particular? If it's only certain ones, it may be a really slow SQL Server - to get any context menus or list databases, it has to execute some queries against the remote servers to collect information, and if they're less than responsive, SSMS will be too.
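If you want to gauge this yourself, time a couple of metadata queries from the same machine; if these are slow, SSMS's menus will be too. These are representative examples (the database name is hypothetical), not SSMS's literal internal queries:

    -- Roughly what populating Object Explorer costs per expansion:
    SELECT name FROM sys.databases ORDER BY name;             -- the database list
    USE MyAppDb;
    SELECT name, create_date FROM sys.tables ORDER BY name;   -- expanding Tables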
