I have a query that creates 400 transactions, each transaction running an update on a table. Running this query on the on-premises development server takes around 100 ms, versus over 500 ms on Azure. I collected some wait statistics, and RESERVED_MEMORY_ALLOCATION_EXT came up higher than on dev, with a wait count of 3,523,254 and 0.9 seconds of total wait time. Any suggestions on how to troubleshoot further?
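One way to dig further is to compare the top waits on both servers over the same workload. On Azure SQL Database the database-scoped DMV is sys.dm_db_wait_stats (on the on-premises box, sys.dm_os_wait_stats is the instance-level equivalent); a minimal sketch:

-- Top waits for this database since the stats were last cleared.
SELECT TOP 10
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_db_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY wait_time_ms DESC;

Note that 0.9 seconds spread across 3.5 million waits is well under a microsecond per wait, so RESERVED_MEMORY_ALLOCATION_EXT may be a symptom of the per-transaction overhead rather than the cause; comparing the full ranked wait list on both servers should show where the extra 400 ms actually goes.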
Related
I have two virtual servers in the cloud: DB and Web. There is an MS SQL Server Express 2012 instance on the DB server, and its database contains a table 'Job' with relatively big records (10 KB per record).
When I run a simple query from the DB server, it takes less than one second to execute, but when I run it from the Web server it takes more than a minute:
SELECT TOP 10000 * FROM Job
There is a direct relationship between the number of retrieved records and the execution time: it takes about seven minutes to retrieve 100,000 records from the Web server.
In SQL Server Profiler the query takes the same CPU time whether it is run from the DB server or the Web server, but the total duration differs.
The same effect shows up on any query that returns a large number of bytes.
I have other application servers in this cloud, and they show the same effect.
But on the development environment and other deployments everything is fine: queries execute only slightly slower on the Web server than on the DB server.
I have checked the network speed between the Web and DB servers; it is stable at about 1 Gb/s. Turning off Windows Firewall has no effect.
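Given that CPU time matches but total duration scales with result size, one way to confirm the time is going into delivering rows rather than executing the query is to watch what the session waits on while the slow query runs from the Web server. A minimal sketch, run from a second connection, with 52 standing in as a placeholder for the slow session's SPID:

-- What is session 52 (placeholder SPID) waiting on right now?
SELECT session_id, status, wait_type, wait_time, cpu_time, total_elapsed_time
FROM sys.dm_exec_requests
WHERE session_id = 52;

If wait_type sits at ASYNC_NETWORK_IO for most of the elapsed time, the server has the rows ready and is stalled on sending them (or on the client consuming them), which points at the virtual network between the two machines rather than at SQL Server itself.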
I have a WCF service on Azure which performs multiple tasks.
Tasks:
1. Download a file from a remote server (about 50 MB).
2. Perform a bulk insert into an Azure SQL Database (about 360,000 records) at once.
3. Run 2 different stored procedures (about 15 seconds tops for both).
The total process takes about 1 minute against my local SQL Server 2012 database,
but when I run it from the cloud WCF service (not a Cloud Service) it takes longer than the connection timeout can handle (4-30 min).
Still, I don't understand why there is such a significant difference in performance.
Is it cloud resources? And if so, how can I make it perform as close as possible to how it runs locally?
Regards,
Avishai
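If cloud resources are the suspicion, one thing worth checking (assuming the target is an Azure SQL Database) is whether the database tier is being throttled while the bulk insert runs; a sketch:

-- Resource usage in roughly 15-second slices; run this during the bulk insert.
-- Sustained values near 100% mean the service tier itself is the bottleneck.
SELECT end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;

For a 360,000-row bulk insert, avg_log_write_percent is the usual column to watch, since log write throughput is governed per service tier.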
We're getting spikes from time to time but can't find what causes them.
How do I monitor Azure SQL DTU usage?
How can I find which queries are consuming the most DTUs, live?
The following will show you the 100 most recent resource-usage log entries. As of now, an entry is created every 15 seconds.
SELECT TOP 100 *
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC
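sys.dm_db_resource_stats tells you when the database was busy, but not which queries were responsible. To approximate the "high DTU" queries, one option is to rank cached plans by their resource consumption, since CPU and I/O are the main inputs to DTU; a sketch:

-- Top CPU consumers among cached plans; re-sort by total_logical_reads
-- (or total_elapsed_time) to surface I/O-heavy queries instead.
SELECT TOP 10
    qs.total_worker_time,
    qs.total_logical_reads,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

These numbers are cumulative since each plan entered the cache, so they show the overall heavy hitters; for what is executing at this instant, sys.dm_exec_requests gives the live view.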
Please have a look at the links below, which discuss the Azure SQL Database Throughput Unit (DTU).
http://azure.microsoft.com/blog/2014/09/11/azure-sql-database-introduces-new-near-real-time-performance-metrics/
Azure SQL Database "DTU percentage" metric
http://msdn.microsoft.com/en-us/library/azure/dn741336.aspx
Regards,
Mekh.
Currently running SQL Server 2008 R2 SP1 on 64-bit Windows Server 2008 R2 Enterprise, on an Intel dual 8-core processor server with 128 GB RAM and a 1 TB internal SCSI drive.
The server has been running our Data Warehouse and Analysis Services packages since 2011. This server and SQL instance are not used for OLTP.
Suddenly and without warning, all of the jobs that call SSIS packages that build the data warehouse tables (using stored procedures) are failing with "Deadlock on communication buffer" errors. The stored procedure that generates the error within the package is different each time the process is run.
However, the jobs will run fine if SQL Server Profiler is running a trace at the time the jobs are initiated.
This initially occurred on our Development server (same configuration) in June. Contact with Microsoft identified disk I/O issues and produced a suggestion to set MaxDOP = 8, which has mitigated the deadlock issue but introduced an issue where the processes can take up to 3 times longer at random intervals.
This just occurred today on our Production server. MaxDOP is currently set to zero. There have been no changes to the OS, SQL Server, or the SSIS packages in the past month. The jobs ran fine overnight on September 5th, but failed with the errors overnight last night (September 6th) and continue to fail on any retry.
The length of time that any one job will run before failing is not consistent, nor is there consistency between jobs. Jobs that previously took 2 minutes to run to completion will fail in seconds, while jobs that normally take 2 hours may run anywhere from 30-90 minutes before failing.
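"Deadlock on communication buffer" is typically the message produced by intra-query parallelism (exchange) deadlocks, so it can help to pull the actual deadlock graphs from the built-in system_health Extended Events session, which exists on SQL Server 2008 R2; a sketch:

-- Recent deadlock graphs captured automatically by the system_health session.
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT CAST(target_data AS xml) AS TargetData
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS Data
CROSS APPLY TargetData.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent);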
Have you considered changing the isolation level of the database? This can help when parallel reads and writes are happening on the database; a sketch follows.
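The usual form of that change is enabling read committed snapshot, so readers stop blocking writers; and since Microsoft already pointed at parallelism on the dev server, capping MaxDOP at the instance level is the other common mitigation. A sketch, where DW is a hypothetical placeholder for the data warehouse database name:

-- Readers use row versions instead of shared locks (DW is a placeholder name).
ALTER DATABASE DW SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
-- Cap parallelism instance-wide (Microsoft suggested 8 on the dev server).
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max degree of parallelism', 8;
RECONFIGURE;

Note that READ_COMMITTED_SNAPSHOT adds row-versioning overhead in tempdb, so it is worth trying on the Development server first.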
Let's say I have a simple ASP.NET MVC web application and a (local) SQL Server. As the ORM, I am using Entity Framework 4.3.1.
To figure out how long the ORM side takes, I've prepared a simple select query and printed out timestamps, like this:
...
using (var context = new Entities())
{
    // (1) timestamp1
    var list = context.Database.SqlQuery<Entity>("select * from entities").ToList();
    // (2) timestamp2
}
...
At the same time, I watched SQL Server Profiler to see the query start/end times.
The result is as follows (note that only the milliseconds are shown, since the query-processing time is less than 1 second):
timestamp1: 149 msec
query start time: 197 msec
query end time: 198 msec
timestamp2: 199 msec
Question: why was so much time (48 msec, 197 - 149) taken before the query started? Is there any way to reduce this?
Thanks!
ADO.NET Entity Framework needs to initialize itself; you'll find that subsequent queries execute with much less lag. Also, don't forget that SQL Server Profiler only logs execution time for the query itself, not the time spent on network transport and other overheads.
There might also be slowdowns associated with waking up SQL Server - you say it's local on your computer, so it's possible that SQL Server's components were paged out to disk. What is the performance like when you run against a dedicated SQL Server box?