Could someone please explain why the EXACT same call to run a stored procedure takes about 25 seconds to complete when run from my local SQL Server Management Studio, but only 5 seconds (the time I'd expect it to take) when run from a query window in the "Manage" facility inside the Azure portal? It's completely consistent no matter how many times I do it!
It's also running slow from our cloud application, which makes me think there's some kind of difference between "internal" and "external" access to the DB server.
Thanks.
The solution was to create a new "Business" instance and move the database there. Performance returned to normal without any changes to the DB or the associated app. Investigation showed that the S0/S1/S2 instances were all SLOWER than the "Business" instances (which are about to be retired by MS). We will be consulting Microsoft this week; it's probable that we need to pay for their Premium service to sustain the performance we get from the old Web/Business instances. This still does not account for WHY the original database slowed down, nor why performance would differ when accessed from the Azure console vs SSMS/our application.
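For anyone hitting this later: you can check which edition and service objective a database is actually running on straight from a query window. If I remember right, Azure SQL Database exposes both through DATABASEPROPERTYEX:

-- Edition (Web/Business/Basic/Standard/Premium) and objective (S0, S1, ...)
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;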
Running SQL Server 2014 Express on our domain. We use Windows Authentication to log on. All queries are performed in stored procedures.
Now, the system runs fine for all our users - except one. When he logs on (using our software), all queries take around 10 times longer (e.g. 30 ms instead of 2 ms). The queries are identical, the database is the same, the network speed is the same, the operating system is the same, the SQL Server drivers are the same, connection pooling is the same, DNS is the same. Changing computers does not help. The problem seems to be linked to the account being used.
What on Earth may be the cause for this huge performance hit?
Please advise!
I would try rebuilding the SP (by running an ALTER statement that duplicates its existing structure) to force SQL Server to recompile it. I don't know every way SQL Server caches things, but it can definitely create distinct execution plans for different types of connections, so I wouldn't be surprised if your slow user is running a version with an inefficient execution plan.
http://www.sommarskog.se/query-plan-mysteries.html
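That article mostly covers how different SET options (ARITHABORT in particular) can give different connections different plans for the same procedure. As a quick test, something along these lines forces a fresh plan without editing the procedure (the procedure name and parameter below are made up):

-- Mark the procedure so it is recompiled on its next execution
EXEC sp_recompile N'dbo.MySlowProc';

-- Or compare against a one-off plan while testing
EXEC dbo.MySlowProc @SomeParam = 123 WITH RECOMPILE;

If the slow user suddenly gets fast after a recompile, you are almost certainly looking at a parameter sniffing or SET options issue rather than anything wrong with his account.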
OK… I’ve been tasked to figure out why an intranet site is running slow for a small-to-medium-sized company (fewer than 200 people). After three days of looking on the web, I’ve decided to post what I’m looking at. Here is what I know:
Server: HP DL380 Gen9 (new)
OS: MS Server 2012 – running hyper-v
RAM: 32GB
Server 2012 was built to run at most 2 to 3 VMs (only running one VM at the moment)
16GB of RAM dedicated to the VM (not dynamic memory)
Volume was created to house the VHD
The volume has a fixed 400GB VHD inside it.
Inside that VHD is Server 2008 R2 running SQL Server 2008 R2 and hosting an IIS 7 intranet.
Here is what’s happening:
A page in the intranet runs a couple of stored procedures that do some checking against data in other tables as well as insert data (some sort of attendance DB) after employee data is entered. The code looks like it creates and drops approximately 5 tables in the process of crunching the data. The page takes about 1 min 50 secs to run on the newer server. I was able to get hold of the old server and run a speed test: 14 seconds.
I’m at a loss… a lot of sites say to alter the code. However, it was running quickly before.
The old server is a 32-bit Server 2003 machine running SQL Server 2000… the new one is obviously 64-bit.
Any ideas?
You should find out where the slowness is coming from.
The bottleneck could be in SQL Server, in IIS, in the code, or on the network.
Find the SQL statements that are executed and run them directly in SQL Server (see the sketch below)
Run the code outside of the IIS web pages
Run the code from a different server
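For the first point, if you don't want to set up a Profiler trace, the plan cache DMVs will show the heavy hitters directly. A minimal sketch of the standard sys.dm_exec_query_stats pattern (needs VIEW SERVER STATE permission):

-- Top 10 cached statements by total elapsed time
SELECT TOP 10
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microsec,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;

Whatever comes out on top is what you paste into a query window and compare between the old and new servers.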
Solved my own issue... just took a while for me to get back to this. Hopefully this will help others.
Turned on SQL Activity Monitor under Tools > Options > At Startup > Open Object Explorer and Activity Monitor.
Opened Recent Expensive Queries, right-clicked the top queries, and selected Show Execution Plan. This showed a missing index for the DB; I added it by clicking the missing-index hint at the top of the plan.
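For anyone who prefers to skip the GUI, the same suggestions live in the missing-index DMVs. A minimal sketch, ranked by rough estimated impact:

-- Missing-index suggestions recorded by the optimizer since the last restart
SELECT
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;

As always, treat these as hints rather than commands; the optimizer suggests indexes per-query and doesn't weigh the cost of maintaining them.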
Hope this helps!
I'm looking for a little advice.
I have some SQL Server tables I need to move to local Access databases for some local production tasks - once per "job" setup, with 400 jobs this quarter, across a dozen users...
A little background:
I am currently using a DSN-less approach to avoid distribution issues
I can create temporary LINKS to the remote tables and run "make table" queries to populate the local tables, then drop the links. Works as expected.
Performance here in the US is decent - 10-15 seconds for ~40K records. Our India teams are seeing >5-10 minutes for the same datasets. Their internet connection is decent, not great, and a variable I cannot control.
I am wondering if MS Access is adding some overhead here that can be avoided by a more direct approach: i.e., letting the server do all/most of the heavy lifting instead of Access?
I've tinkered with various combinations, with no clear improvement or success:
Parameterized stored procedures from Access
SQL Passthru queries from Access
ADO vs DAO
Any suggestions, or an overall approach to suggest? How about moving data as XML?
Note: I have Access 2007, 2010, and 2013 users.
Thanks!
It's not entirely clear, but if the MS Access database performing the dump is local and the SQL Server database is remote, across the internet, you are bound to bump into the physical limitations of the connection.
ODBC drivers are not meant to be used for data access beyond a LAN, there is too much latency.
When Access queries data, it doesn't open a stream; it fetches a block, waits for the data to be downloaded, then requests another batch. This is OK on a LAN but quickly degrades over long distances, especially when you consider that communication between the US and India probably has around 200ms of latency, and you can't do much about that: it adds up very quickly if the communication protocol is chatty. For example, fetching 40K records in blocks of 100 rows would mean 400 round trips, i.e. 80 seconds lost to latency alone. All this sits on top of the connection's bandwidth, which is very likely way below what you would get on a LAN.
The better solution would be to perform the dump locally and then transmit the resulting Access file after it has been compacted and maybe zipped (using 7z for instance for better compression). This would most likely result in very small files that would be easy to move around in a few seconds.
The process could easily be automated. The easiest is maybe to perform this dump automatically every day and make it available on an FTP server or an internal website, ready for download.
You can also make it available on demand, maybe through an app running on a server and made available through RemoteApp using RDP services on a Windows 2008 server, or simply through a website or a shell.
You could also have a simple Windows service on your SQL Server that listens for requests from a remote client installed on the local machines everywhere; it would process the dump and send it to the client, which would then unpack it and replace the previously downloaded database.
Plenty of solutions for this, even though they would probably require some amount of work to automate reliably.
One final note: if you automate the data dump from SQL Server to Access, avoid using Access in an automated way. It's hard to debug and quite easy to break. Use an export tool instead that doesn't rely on having Access installed.
Renaud and all, thanks for taking the time to provide your responses. As you note, performance across the internet is the bottleneck. The fetching of blocks (vs a contiguous DL) of data is exactly what I was hoping to avoid via an alternate approach.
Our workflow is evolving to better leverage both sides of the clock: User1 in the US completes their day's efforts in the local DB and then sends JUST their updates back to the server (based on timestamps). User2 in India also has a local copy of the same DB and grabs just the updated records off the server at the start of his day. So, pretty efficient for day-to-day stuff.
The primary issue is the initial DL of the local DB tables from the server (huge multi-year DB) for the current "job" - it should happen just once at the start of the effort (~1-week-long process). This is the piece that takes 5-10 minutes for India to accomplish.
We currently do move the DB back and forth via FTP - DAILY. It is used as a SINGLE shared DB and is a bit LARGE due to temp tables. I was hoping my new timestamp-based push-pull of just the changes daily would have been an overall plus. Seems to be, but the initial DL hurdle remains.
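For what it's worth, the timestamp-based pull can stay a single query on the server side. A minimal sketch, with every table and column name below being a made-up placeholder, that captures the window up front so no rows fall between the cracks:

-- All object names here are hypothetical placeholders
DECLARE @LastSync datetime, @Now datetime;

SELECT @LastSync = LastSyncUtc FROM dbo.SyncState WHERE UserName = SUSER_SNAME();
SET @Now = GETUTCDATE();

-- Pull only the rows changed since the last pull
SELECT *
FROM dbo.JobRecords
WHERE LastModifiedUtc > @LastSync
  AND LastModifiedUtc <= @Now;

-- Move the marker forward only after the pull succeeds
UPDATE dbo.SyncState
SET LastSyncUtc = @Now
WHERE UserName = SUSER_SNAME();

Capturing @Now before the SELECT means rows modified while the pull runs are simply picked up next time instead of being skipped.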
I've written a .NET application which has a SQL Server 2008 R2 database with a relatively small number of tables, but some of the tables might hold some 100,000,000 records! To improve the performance of SELECTs, I've created the necessary indexes and it works well. But, as everyone knows, indexes need to be rebuilt when they get fragmented.
We have installed SQL Server 2008 R2 Express on one of the customer's PCs, plus my WinForms application. Three more PCs connect to this database over a regular LAN, and everything seems fine.
Now, the problem is that I want to rebuild the indexes, for example every time a user starts my program on ANY of the machines. Well, I can execute several ALTER INDEX statements, but as stated in the MS docs, OFFLINE indexing will lock the tables for the duration of the operation. That means other users will lose access to the tables when a user starts the program! I know there is an ONLINE option, but it doesn't work in the Express edition of SQL Server.
In other environments, with a real server running all the time, I would create an Agent job that rebuilds the indexes overnight.
How can I solve this problem?
Without a normal 24/7 server running, it's difficult to do such maintenance automatically without disturbing users. I don't think putting that job at application startup is a good idea: it can be kicked off many times in a row for no real reason, it slows down startup significantly if the tables are big, and, as you say, it locks everyone else out.
I would opt for one of two choices:
Set up a job on the "server" to do the rebuild on either SQL Server startup or computer startup. It will slow down the initialization of that PC when the user first powers it on, but once done, it should work OK, most likely with results similar to a nightly job.
Add an option in the application to launch the reindexing job manually whenever the user wants, with a warning that it will take some time and that nobody else can use the system during the process. While this provides maximum flexibility, it relies on the user doing it when they start noticing delays.
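Whichever option you pick, the job doesn't have to hit every index. A minimal sketch that only touches fragmented ones (the 10/30 percent thresholds are the usual rules of thumb; note that REORGANIZE doesn't take the long table locks a REBUILD does, even on Express):

-- Build one ALTER INDEX statement per fragmented index, then run the batch
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql +
    N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON ' +
    QUOTENAME(s.name) + N'.' + QUOTENAME(o.name) +
    CASE WHEN ips.avg_fragmentation_in_percent > 30
         THEN N' REBUILD;'      -- heavy fragmentation: rebuild (offline on Express)
         ELSE N' REORGANIZE;'   -- mild fragmentation: always an online operation
    END + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
JOIN sys.objects AS o ON o.object_id = ips.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.index_id > 0          -- skip heaps
  AND o.is_ms_shipped = 0;

EXEC sp_executesql @sql;

That keeps the offline window as short as possible on the 100,000,000-row tables, since untouched indexes cost nothing.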
I would like to know the best way to make a copy of an on-premises SQL Server 2008 (not R2) database in SQL Azure and keep the copies synchronized.
Think of the SQL Azure copy as a failover kind of structure...
Notes:
The database runs fine in SQL Azure
I have already figured out how to get the rest of the app running on Azure
Please consider suggestions of the type "Upgrade to SQL Server 2012 because of X" if the gains (reliability, efficiency, time to replicate, etc...) are worth it
I'm looking for instant replication (as fast as possible)
Yes, it will have to sync back eventually. If the on-premises deployment crashes and the cloud copy gets activated and changed, syncing back will be necessary, but I think it does not need to be automatic... if it is, even better!
The database consists of 900+ tables (legacy system)
http://www.windowsazure.com/en-us/manage/services/sql-databases/getting-started-w-sql-data-sync/
http://msdn.microsoft.com/en-us/library/hh456371.aspx
I think your best bet is to use SQL Data Sync. It gives you bidirectional sync, and we currently use it to sync data around the world across datacenters plus one local on-premises database. It will only give you 5-minute sync timing, but that will probably do; otherwise the next best option is to use SQL Server VMs and do it the old-fashioned way. We have found SQL Azure Data Sync to be reasonably reliable, and we have been running it for a good six months, syncing across 4 databases in four data centres in Azure.
Some problems with it, though:
It uses Triggers.
It will obviously add load and connections to your current SQL database.
The new control panel in Azure is a nightmare for it, so I would use the old panel for the moment.
It was in preview last time I looked, so it might not be 100% suitable for you.
I would imagine there are some better third-party solutions out there, but off the shelf and in Azure, SQL Data Sync is well worth a look for the situation you are describing.