SQL Server slower than data transfer

I have two Windows Server 2012 machines connected by a 1Gbps switch. They are blades in the same enclosure and share the same storage.
When I copy files between them I get a speed of 982Mbps. So that is great ;)
I also have SQL Server 2012 installed on both machines, and those servers are linked to each other over TCP (not named pipes).
And the connection between them is slow. I have created one large table, ~2GB, without indexes.
I have tried with an indexed table as well; every query is ~3 times slower when the linked server is involved.
I have tried executing a bunch of small queries that each select just one row really fast, and they are also 3 times slower.
No matter what I do, the Ethernet speed is always ~120Mbps.
When I execute this from SQL SERVER A:

select * into SQL_SERVER_A.NEWTABLE from SQL_SERVER_A.TABLE

it takes about 1 min. But for this:

select * into SQL_SERVER_A.TABLE from LINKED_SERVER_SQLB.DATABASE.SCHEMA.TABLE

I need about 2:30.
So my guess was: maybe the HDD cannot optimize how it reads/writes, since the data is on a different server.
But when I execute this, also from SQL SERVER A, it also takes about 2:30:

select * into LINKED_SERVER_SQLB.DATABASE.SCHEMA.NEWTABLE from LINKED_SERVER_SQLB.DATABASE.SCHEMA.TABLE

So I am inserting into the same database, but the time is different. So my guess is it is a network issue. The speed never goes above 120Mbps.
I have tried tracert and I reach the other machine in one hop.
Is this a network issue?
How can I investigate further?
I have tried with OPENQUERY; the results are the same.
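One way to investigate further from inside SQL Server is to snapshot the wait statistics around one of the slow runs (a sketch using standard DMVs; OLEDB and ASYNC_NETWORK_IO are the wait types linked-server traffic usually accumulates, though which one dominates here is an assumption):

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('OLEDB', 'ASYNC_NETWORK_IO')
ORDER BY wait_time_ms DESC;
-- Capture this before and after one of the slow SELECT ... INTO runs; a large
-- growth in OLEDB wait time points at linked-server round trips rather than
-- raw Ethernet bandwidth.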
Execution plan: (screenshot not included)
Related

Linked Server strangeness - Performance from Joins

I have an odd situation. We have an AS400 with a Prod side and a Dev side (same machine, 2 NIC cards). From a production MS SQL Server we run a query that uses a linked server, which I'll call 'as400'. The query does 3 joins, and the execution plan looks roughly like [Remote Query] => Got Results: it does the joins on the remote server (the production AS400). This executes in no more than 0:00:02 (2 seconds). One of the joined tables has 391 MILLION rows; the query pulls 100 rows, joined to the other table.
Now it gets weird. On the Dev side of the same machine, running from a different SQL Server, coming in the other NIC card, executing the same query against a different database (the dev one), the execution plan is quite different. It is:
[Query 1] hash match (inner join) with [Query2] Hash with [Query3] Hash with [Query4]
expecting each query to return 10,000 rows (I'm sure that is just a guess, since the optimizer doesn't know the actual row counts for a remote table). What it appears to be doing is pulling 391 million rows back in Query2, and it ran for more than 23 HOURS before I gave up and killed the query. (Running in SSMS.)
Clearly, the MS SQL Server is making the decision to not pass this off to the AS400 as a single query. Why?
I can 'solve' the problem by using OPENQUERY(as400, cmd) instead, but that opens us up to SQL injection, prevents simple syntax checking on the query, and has other drawbacks I don't like. The OPENQUERY version takes 6 seconds and returns the correct data.
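Roughly the shape of that workaround (the library, table, and column names below are placeholders, since the actual query isn't shown); the whole inner statement is sent to the AS400 as-is:

SELECT *
FROM OPENQUERY(as400, '
    SELECT t1.ORDER_ID, t2.CUSTOMER_NAME
    FROM PRODLIB.BIG_TABLE t1
    JOIN PRODLIB.SMALL_TABLE t2 ON t2.ORDER_ID = t1.ORDER_ID
    WHERE t1.ORDER_DATE >= ''2015-01-01''
');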
If we solve this by rewriting all our (working, fast) production queries so they can also run against dev, it involves a LOT of effort, and there are downsides to it in actual production.
I tried using the 'remote' hint on the join, but that isn't supported by the AS400 :-(
I looked at the configuration/versions of the SQL Servers and that didn't seem to offer a clue either. (The SQL Servers are nearly the same version: 10.50.6000 for the one that works, 10.50.6220 (newer) for one that fails, and also 10.50.6000 for the other one that is failing.)
Any clues anyone? Would like to figure this out, we have had several people looking at this for a couple of weeks - including the Database Architect and the IBM AS400 guru, and me. (So far, my OpenQuery is the only thing that has worked)
One other point: the MS servers that are not working seem to be opening about 5 connections per second to the AS400 (while the query runs for 23 hours). I don't understand that, and I'm not 100% sure it is related to this issue, but it was brought up by the AS400 guy.
I despise linked servers for this reason (among many others). I have always had good luck with OPENQUERY() combined with sp_executesql to help prevent SQL injection.
There is mention of this here: including parameters in OPENQUERY
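Since OPENQUERY only accepts a literal string, the usual way to keep remote execution while still passing parameters safely is a parameterized pass-through with EXEC ... AT. A sketch (this assumes RPC Out is enabled on the linked server and that the provider accepts ? parameter markers; the table and column names are hypothetical):

DECLARE @orderId INT = 12345;
-- The value travels as a real parameter; nothing is concatenated into the SQL text
EXEC ('SELECT * FROM PRODLIB.BIG_TABLE WHERE ORDER_ID = ?', @orderId) AT as400;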
Without seeing the queries and execution plans, it sounds like this is a problem with permissions when accessing statistics on the remote server. For the query engine to make use of all available statistics and build a plan properly, make sure the db user that is used to connect to the linked server is one of the following on the linked server:
1. The owner of the table(s).
2. A member of the sysadmin fixed SERVER role.
3. A member of the db_owner fixed DATABASE role.
4. A member of the db_ddladmin fixed DATABASE role.
To check which db user you're using to connect to the linked server, use Object Explorer:
Expand Server\Instance > Server Objects > Linked Servers, right-click your linked server and select Properties, then go to the Security page.
If you're not mapping logins in the top section (which I wouldn't suggest), then select the last radio button at the bottom to make connections using a specific security context, and enter the username and password for the appropriate db user. Rather than using 'sa', create a new db user on the linked server that satisfies #2 or #3 from above and use that. Then every time the linked server is used, it will connect with the necessary permissions.
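The same mapping can also be set from T-SQL with sp_addlinkedsrvlogin; a sketch, where the remote login name and password are placeholders to replace with your own:

EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'as400',         -- the linked server to configure
    @useself     = N'FALSE',         -- don't pass the caller's own credentials through
    @locallogin  = NULL,             -- apply to all local logins
    @rmtuser     = N'stats_reader',  -- hypothetical remote user with db_owner membership
    @rmtpassword = N'<password>';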
Check the below article for more detail. Hope this helps!
http://www.benjaminnevarez.com/2011/05/optimizer-statistics-on-linked-servers/

Queries slow when run by specific Windows account

Running SQL Server 2014 Express on our domain. We use Windows Authentication to log on. All queries are performed in stored procedures.
Now, the system runs fine for all our users - except one. When he logs on (using our software), all queries take around 10 times longer (e.g. 30 ms instead of 2 ms). The queries are identical, the database is the same, the network speed is the same, the operating system is the same, the SQL Server drivers are the same, connection pooling is the same, DNS is the same. Changing computers does not help. The problem seems to be tied to the account being used.
What on Earth may be the cause for this huge performance hit?
Please advise!
I would try rebuilding the stored procedure (by running an ALTER statement that duplicates its existing structure) to force SQL Server to recompile it. I don't know every way SQL Server caches things, but it can definitely create distinct execution plans for different types of connections, so I wouldn't be surprised if your slow user is running a version with an inefficient execution plan.
http://www.sommarskog.se/query-plan-mysteries.html
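Forcing the recompile is simpler with sp_recompile, and the plan cache can be inspected directly. A sketch (the procedure name is a placeholder; the linked article explains why differing SET options, such as ARITHABORT, can give different users different cached plans for the same procedure):

-- Drop all cached plans for the procedure so the next call recompiles
EXEC sp_recompile N'dbo.usp_YourProc';

-- List cached plans for the procedure together with the SET options each was
-- compiled under; two rows with different set_options values means users are
-- running on different plans
SELECT st.text, pa.value AS set_options, cp.usecounts
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%usp_YourProc%';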

SQL Server 2014 Standard Edition slows the machine when database size grows

I have a scenario where an application server saves 15k rows per second into a SQL Server database. For the first few initial hours the machine is still usable, but once the database size grows to ~20 gigs the machine becomes unusable.
I have seen some topics/forums/answers/blogs suggesting limiting SQL Server's max memory usage. Any thoughts on this?
Btw, I am using SqlBulkCopy to insert the rows into the database.
I have two suggestions for you:
1 - Database settings:
When you create the database, use a large initial size, and consider a bigger autogrowth increment (percentage or fixed size).
You want to minimize how often your filegroups need to grow (a T-SQL sketch of both suggestions follows below).
2 - Server settings:
In your SQL Server settings I would recommend removing one logical processor from SQL Server's affinity. The OS can use that processor when SQL Server is busy with heavy loads on the other processors. In my experience, this usually gives a nice boost to the OS.
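A sketch of both suggestions in T-SQL; the database name, logical file name, sizes, memory cap, and CPU range are all assumptions to adapt (the memory cap is included because the question asks about it):

-- 1) Pre-size the data file and use a fixed autogrowth increment
ALTER DATABASE YourDb
MODIFY FILE (NAME = YourDb_Data, SIZE = 50GB, FILEGROWTH = 1GB);

-- Cap SQL Server's memory so the OS keeps some headroom (value is a guess; tune it)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;

-- 2) Leave one logical processor to the OS (here: keep CPUs 0-6 of 8 for SQL Server)
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 6;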

How much is the network - determining network overhead in SQL Server

We have a dev server running C# and talking to SQL server on the same machine.
We have another server running the same code and talking to SQL server on another machine.
A job does 60,000 reads (that is, it calls a stored procedure 60,000 times; each read returns one row).
The job runs in 1/40th of the time on the first server compared to it running on the second server.
We're already looking at the 'internal' differences between the two SQL Servers (fragmentation, tempdb, memory, etc.), but what's a good way to determine how much slower the second config is simply because it has to go over the network?
[rather confusingly I found a 'SQL Server Ping' tool but it doesn't actually attempt any timing measurement which, as far as I can see, is what we need]
Open SQL Server Management Studio on the remote machine. Start a new query. Click Query > Include Client Statistics. Run your stored procedure. In the Client Statistics tab of the results, you'll see some basic information about how many packets were sent back and forth over the network. My guess is that for one read, you're not going to see that much overhead.
To get a better idea, I'd try doing a plain select of 60,000 records (since you said it's returning 60,000 records one by one) over the network from your remote machine. Again, that doesn't tell you the stored procedure overhead, but it'll give you a quick seat-of-the-pants idea of the network speed between machines.
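A cheap way to put numbers on that comparison (a sketch; the table name is a placeholder): run the same statement on both servers with timing statistics on, and compare CPU time with elapsed time.

SET STATISTICS TIME ON;
SELECT * FROM dbo.YourTable;  -- the 60,000-row pull, as a single select
SET STATISTICS TIME OFF;
-- Compare the "CPU time" and "elapsed time" lines in the Messages tab on both
-- servers; if elapsed dwarfs CPU only on the remote box, the gap is mostly
-- time spent shipping results over the wire (and the client consuming them).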
SQL Server ships with the Profiler utility. This will tell you what the execution time of your query is on each of your SQL Server instances. Note any discrepancies. Whatever time (in the ExecutionTime column) cannot be accounted for there is transmission time... or client display time. Perhaps your client machine takes longer to render, or compute, the results.
What results are you expecting? Running everything on one machine vs. over a network will certainly give you different timings. Your biggest timing difference will be the network throughput, since you need to communicate with the networked server both ways.
If you can SET NOCOUNT ON, that will also reduce network traffic.
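A minimal sketch of that setting in a procedure (the procedure, table, and column names are hypothetical); it suppresses the "N rows affected" message that is otherwise sent to the client after every statement:

CREATE PROCEDURE dbo.usp_GetOneRow
    @Id INT
AS
BEGIN
    SET NOCOUNT ON;  -- don't send a row-count message back per statement
    SELECT Col1, Col2
    FROM dbo.YourTable
    WHERE Id = @Id;
END;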

SQL Server 2005 64bit query blocking

We are experiencing some difficulties with SQL Server performance and wanted some help.
Our environment is: -
Windows 2003 Enterprise x64 Edition R2
Intel E5450 Quad Core 3GHz processor
16GB RAM
SQL Server 2005 64bit Enterprise Edition (9.00.3282.00)
Database compatibility level is 80 (but tested on 90 as well)
Hyperthreading is switched off
We have one database with a 1.2 million row table which is being queried (inefficiently); the query saturates all 4 processors to the point where all other queries are blocked until it finishes. This includes queries against separate databases and totally unrelated tables.
If the query is run with OPTION (MAXDOP 1), all 4 cores run at 25% and the query takes 4 times as long, but there is no blocking in that case. In addition, we have run the same query on SQL 2000: the response time is the same, but there is no CPU saturation.
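(For reference, that hint is simply appended to the statement; the query body below is a placeholder, since the actual query isn't shown in the post:)

SELECT COUNT(*)
FROM dbo.TheBigTable   -- placeholder for the real 1.2 million row table
OPTION (MAXDOP 1);     -- limit this statement to a single scheduler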
We have a suspicion that the problem may be contention over tempdb. In this particular instance we have a stored proc using a temp table, and I assume the parallel query is also accessing tempdb.
Obviously the standard response will be to rewrite the queries. Firstly, that is not really a viable option, and secondly it only treats a symptom of the problem. Essentially the server is unable to process multiple requests, which is of great concern.
Does anyone know of any patches, config changes or known problems that might be causing this? Has anyone seen this before? Is it a 64bit nuance?
Regards
Lee
Sounds like the table isn't properly indexed. A table with 1.2 million rows shouldn't take anything to query; I've got tables with 60+ million rows and I can query them in milliseconds.
What does the query look like, and what does the table look like?
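If indexing does turn out to be the issue, the usual shape of the fix is a covering index on the filtered columns; a sketch only, since the post doesn't show the query (all names below are hypothetical):

CREATE NONCLUSTERED INDEX IX_BigTable_FilterCol
ON dbo.BigTable (FilterCol)          -- column(s) the query filters or joins on
INCLUDE (SelectCol1, SelectCol2);    -- columns the query returns, to avoid lookups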
Sounds like locking on tempdb, which effectively stops anything else that uses tempdb from running until it is finished. In particular, it may be that the sysobjects table is locked.
The ideal step is to rewrite the query so it stops locking tempdb for its entire duration; however, I'm guessing that is not easily possible.
You could try setting up a second instance of SQL Server to run this database. That way any temporary locking will only affect itself and not the other databases on the server.
To reduce the impact on other queries running on the same instance, you can look into using multiple files for tempdb (see the sketch below).
However, once you're going this far for a solution, you really need to go back to the problem query and try to make its footprint smaller. As Kristen commented, simply changing the way you create a temporary table can have a drastic effect on how things are locked.
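A sketch of the multiple-tempdb-files change (paths, sizes, and file count are assumptions; the usual guidance is equally sized files, roughly one per core up to about 8):

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 1GB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 1GB, FILEGROWTH = 256MB);
-- New files take effect without a restart; keep them equally sized so the
-- proportional-fill algorithm spreads allocations across all of them.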