SQL Server performance counter for incoming and outgoing bytes - sql-server

For a project I need to read the incoming and outgoing bytes per second of a SQL Server (2012) instance or database (it doesn't matter which). For this I found the following performance counters:
SQL Server, Broker / DBM Transport Object
Receive I/O bytes/sec
Send I/O bytes/sec
When I start SQL Server Management Studio and execute some SELECT statements, the values of these performance counters stay at 0. However, when I include the client statistics I can see that Bytes sent from client and Bytes sent from server are not 0. I'm executing these SELECT statements against a default instance installed on the same PC.
Does anyone know how to solve this issue?
Thanks in advance

The documentation explains what SQL Server, Broker / DBM Transport Object measures:
The Broker / DBM Transport performance object contains performance
counters that report networking information for Service Broker and
database mirroring.
There is no performance counter for Transact-SQL traffic. If it helps, the DMV sys.dm_exec_connections will aggregate the traffic counters for each connection. If the traffic occurs over a network interface then you could use the system network counters, that is, the Network Interface Object. But a local test would not register anything, because the connection will use the shared memory protocol.
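For example, a minimal sketch against that DMV, using the per-connection read/write counters and timestamps it exposes (num_reads, num_writes, last_read, last_write):
-- per-connection transport and read/write counters
SELECT session_id, net_transport, num_reads, num_writes, last_read, last_write
FROM sys.dm_exec_connections;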
That being said, it is unusual to have to measure SQL Server Transact-SQL network traffic. If the question ever arises, then you're doing it wrong: network traffic should always be negligible. The dimension everybody is interested in is I/O, for which there is support in the SQL Server, Buffer Manager Object, the SQL Server, Databases Object and DMVs like sys.dm_io_virtual_file_stats.
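For instance, a minimal sketch of the per-file I/O view (passing NULL for both parameters returns statistics for all files of all databases):
-- cumulative I/O per database file since instance start
SELECT DB_NAME(database_id) AS database_name, file_id,
       num_of_reads, num_of_bytes_read, num_of_writes, num_of_bytes_written, io_stall
FROM sys.dm_io_virtual_file_stats(NULL, NULL);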

Related

Go database client killing more connections to SQL Server than expected

As part of debugging some other issues on our server I noticed some really odd behavior with respect to connections that I'm hoping to understand.
I have two Go servers: one talks to a SQL Server RDS instance, and the other talks to a managed SQL Server instance in Azure.
I believe there is a slight difference in the way the 2 backends work - RDS has a single port (1433) on which the client authenticates and subsequently establishes the connection. Azure SQL seems to authenticate on port 1433 and then redirect the client to another service that actually handles the connections.
In both cases I've got substantial load running against the servers: at least 500 requests/s, with peaks of about 2k req/s. Each of these requests results in a SELECT query that returns a single row via a primary key lookup - so really short-lived work on SQL Server. The average time per query is 50-80 ms on both, with p95 in the 100-150 ms range.
Behavior I'm trying to understand:
I'm using the Go database/sql package with an MS SQL driver (specifically go-mssqldb).
I've set Max Idle connections and Max Open connections to 64.
What I would expect: 64 long running Established connections that are occasionally idle but quickly reused.
What I'm seeing: Generally 64 Established connections, with the number often dropping down to somewhere between 50 and 64. This also results in 200-400 connections in the TIME_WAIT state at any given time.
What could be causing this behavior? Is it just the fact that the Go driver lazily closes connections? If so, why would the number drop below 64?
I'm happy to provide any more details!
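For reference, one way to observe that churn from the SQL Server side is to look at how recently the current connections were established; a hedged sketch using the standard DMVs (program_name will show however the driver registers itself):
-- connections ordered by how recently they were opened; frequent new
-- connect_time values suggest the pool is closing and reopening connections
SELECT c.session_id, c.connect_time, s.status, s.program_name
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s ON s.session_id = c.session_id
WHERE s.is_user_process = 1
ORDER BY c.connect_time DESC;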

Persistent time out issues copying very large tables across SQL Server 2012 instances on Amazon RDS

To give some context, I'm currently running a SQL Server 2012 instance on Amazon RDS and I've had to move to a larger instance twice already. The first time SQLAzureMW was the way to go, but at the time no table was that significantly large. The second time, SQLAzureMW always timed out the source server on the bcp command with large tables (a few over 5 GB). Similarly, the SSIS Import / Export Wizard also timed out. I found the source server was always the problem, so I tried increasing the instance's class from an m1.medium to an m1.xlarge, to no avail; the source server still always timed out before making any significant progress on the large tables.
In the end I ended up writing my own .NET program that simply ran a "SELECT * FROM [table] ORDER BY [id] OFFSET {0} ROWS" on the large source tables and pushed the results into SqlBulkCopy on the destination server. Again the source server timed out repeatedly, but I wrapped the try and catch statements in a loop that would simply resume the query from the last point where SqlBulkCopy left off. That being said, I'm not exactly thrilled with this solution.
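The paged read at the heart of that loop looks roughly like this (a sketch; the table name, key column and the offset/batch-size variables are placeholders):
-- resumable read: skip the rows already copied, fetch the next batch
SELECT *
FROM dbo.LargeTable
ORDER BY id
OFFSET @RowsAlreadyCopied ROWS
FETCH NEXT @BatchSize ROWS ONLY;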
I'm considering building a solution around the Microsoft.SqlServer.Management.Smo.Transfer class but I'm afraid there might be the same problems with lack of recovery from a broken source connection.
I'd much rather have an out-of-the-box solution for this, like SQLAzureMW was before the tables got too large, and like I'd expect the SSIS Import / Export Wizard to be. There has to be a better way.
We were running into a similar situation: running SQLAzureMW on a Windows Server 2012 EC2 instance connecting to a SQL Server 2012 RDS instance. AWS support suggested the following changes on our EC2 instance and they seem to have solved all of our issues:
Increase the TCP/IP timeout value as described here (I'm not sure this is actually necessary): http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html
Disable all TCP offloading for the network adapter.
Instructions from AWS:
Here are the steps to disable TCP Offloading: go to the properties of the Citrix PV ethernet adapter, click Configure, go to Advanced, and disable all of the following properties: IPv4 Checksum Offload, Large Receive Offload (IPv4), Large Send Offload Version 2 (IPv4), TCP Checksum Offload (IPv4), UDP Checksum Offload (IPv4).
Then as a final step run the following commands from the command prompt:
netsh int ip set global taskoffload=disabled
netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled
netsh int tcp set global netdma=disabled
This issue has been known and reported to MSFT. The problem here is not with SQL Server (your source). The NIC drivers have a feature called TCP Chimney which offloads bulk data movement from the CPU to the network card, i.e. for large data movement the CPU does not get involved and instead relies on the network card to process the data. But while doing so, the NIC sometimes runs out of memory (a known bug).
You can simply turn the Chimney feature off and give it another try. If your source is a production box, you may want to create a backup of the DB before doing anything with that machine (just to be on the safe side). People have reported resolving this problem by turning the feature off. Here is a link you can follow.
I thought I answered this but it turns out the problem was the instances I chose. I believe the m1 class of instances shared the same hardware network device for SAN storage and networking. The result being that enough network activity caused the system drive, and thus the virtual memory, to become inaccessible at least for an instant. Spending the money on newer hardware, m2 and above, solved the problem.

What mechanism should be used for asynchronous communication between two SQL Servers in this case?

We use a central SQL Server (2008 Standard edition) and several smaller, dedicated SQL Servers (Express editions). We need to implement some mechanism for transferring data asynchronously from the dedicated, decentralized SQL Servers to the central one (bigger volume, see below) and back from the central SQL Server (a few records, basically some notifications for the machines and possibly some optimization hints).
The dedicated SQL Servers are physically located near the technology machines, and they collect, say, (datetime, temperature) rows at regular intervals (think a few seconds between samples). There are about 500 records for one job, but the next job follows immediately (the machine does not know it is a new job -- being quite stupid in that sense -- and simply collects the temperatures on and on).
The technology machines must be able to work without the central SQL Server, and the central SQL Server must also work when a machine is not accessible (i.e. its dedicated SQL engine cannot be reached, switched off with the machine). In other words, the solution need not be super fast, but it must be robust in the sense that no collected data is lost.
The basic idea is to move the collected data from the dedicated SQL Server (preprocessed to the normalized format with the ID of the machine) to a well-known table on the central SQL Server. Only the newer data should be sent, to minimize the amount of data. That transfer should be started by the dedicated SQL Server at regular intervals (say one hour) if the connection is OK. If the connection is not OK, the data will be sent in the next interval, etc.
Another well-known table on the central SQL Server will be used to send notifications to the dedicated SQL Server engines. This way the dedicated engine can be told (for example) what data has already been processed/archived on the central SQL Server (i.e. a hint for which records may already be deleted from the local database on the dedicated machine), or whatever information is hinted from the central server (just hints or other non-real-time requirements). The hints will be collected by the dedicated SQL Server (i.e. also the machine's responsibility). In other words, the central SQL Server only processes its well-known, local tables. It does not try to connect to the dedicated SQL Server machines.
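To make the intended hourly push concrete, a rough sketch (all names here are placeholders: [CentralServer] would be a linked server configured on the dedicated machine, and @LastSentId a watermark it keeps locally):
-- push only the rows newer than the last successfully sent row
INSERT INTO [CentralServer].CentralDb.dbo.Measurements (MachineId, MeasuredAt, Temperature)
SELECT @MachineId, MeasuredAt, Temperature
FROM dbo.LocalMeasurements
WHERE Id > @LastSentId;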
The solution should use only the standard mechanisms -- SQL commands (via stored procedures), no external software. What kind of solution should I focus on?
Thanks,
Petr
[Edited later] The SQL Servers are on the same Local Area Network.
If you are willing to make a mental switch and stop thinking in terms of tables and rows, and instead think in terms of data and messages, then Service Broker can handle all the communication, delivery and message processing. Instead of locally (on the Express machines) doing INSERT INTO LocalTable(datetime, temperature) VALUES (...), you think in terms of:
BEGIN DIALOG CONVERSATION @handle ... TO SERVICE 'CentralServer' ...;
SEND ON CONVERSATION @handle MESSAGE TYPE [Measurement] (<datetime...><temperature ...>)
See Using Service Broker instead of Replication or High Volume Contiguous Real Time ETL
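For a more concrete picture, here is a minimal sketch of the objects and the send on the Express side, under assumed (hypothetical) names for the message type, contract, queue and services; the central server would need matching objects, routes and typically an activation procedure to process the incoming messages:
-- one-time setup on the dedicated (Express) side
CREATE MESSAGE TYPE [Measurement] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [MeasurementContract] ([Measurement] SENT BY INITIATOR);
CREATE QUEUE MachineSendQueue;
CREATE SERVICE [MachineService] ON QUEUE MachineSendQueue ([MeasurementContract]);

-- per measurement (or per batch), instead of a local INSERT
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [MachineService]
    TO SERVICE 'CentralService'
    ON CONTRACT [MeasurementContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h
    MESSAGE TYPE [Measurement]
    (N'<measurement><datetime>2012-05-01T10:00:00</datetime><temperature>71.5</temperature></measurement>');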
Sounds like a job for merge replication.

Does connection count matter?

We have an application that uses NHibernate to connect to our database on SQL Server. We use connection pooling and a session-per-request approach to execute our queries against SQL Server.
We used SQL Server Activity Monitor to monitor the connection count and noticed there were 25-30 connections involved whenever a user logged in to the system.
So here's my question: can a large number of connections to SQL Server lead to performance issues?
Each connection to SQL Server requires the allocation of a certain amount of memory, so there is a performance consideration in this regard.
In the scheme of things however, 20-30 connections is a very small number.
Have you validated that all connections belong to your application? The reason I ask is because SQL Server itself will establish and maintain a certain number of connections/sessions as part of the server's overall operation.
Some useful DMVs for you to monitor:
select * from sys.dm_exec_connections
select * from sys.dm_exec_sessions
Session IDs of 51 and above are from outside of SQL Server, so to speak, i.e. user sessions.
Further to comments:
SQL Server 2005 can support up to 32,767 connections. To check your capacity execute:
select @@MAX_CONNECTIONS
If connection pooling is being used then connections will remain open and in a sleep state until required for processing requests. Alternatively, perhaps the application is not closing connections when requests have finished processing.
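To check that from the SQL Server side, a hedged sketch that groups user sessions by program, login and status (columns as documented for sys.dm_exec_sessions):
select program_name, login_name, status, count(*) as session_count
from sys.dm_exec_sessions
where is_user_process = 1
group by program_name, login_name, status
order by session_count desc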
I can only comment from a SQL Server perspective as I am not familiar with the mechanics of NHibernate.

Fastest SQL Server protocol?

What is the fastest SQL Server connection protocol?
Related: which protocols are available remote versus local, and does that affect the choice of fastest protocol?
VIA. This is the fastest SQL Server protocol; it runs on dedicated hardware and is used in setting SQL Server benchmark records.
Note that the VIA protocol is deprecated by Microsoft, and will be removed in a future version of Microsoft SQL Server. It is however supported in SQL Server 2008, SQL Server 2008 R2 and SQL Server 2012.
Shared Memory is next in performance, but it only works between a client and a server that can actually share memory, so local connections only.
For remote connectivity on ordinary hardware, TCP is the way to go. Under normal operations, it has the same performance as Named Pipes. On slow or busy networks, it outperforms NP in robustness and speed, a fact documented in MSDN:
For named pipes, network communications are typically more interactive. A peer does not send data until another peer asks for it using a read command. A network read typically involves a series of peek named pipes messages before it starts to read the data. These can be very costly in a slow network and cause excessive network traffic, which in turn affects other network clients.
Named Pipes can also lead to client connection timeouts:
TCP/IP Sockets also support a backlog queue. This can provide a limited smoothing effect compared to named pipes that could lead to pipe-busy errors when you are trying to connect to SQL Server.
Unfortunately the normal client configuration tries NP first, and this can cause connectivity problems (for the reasons cited above), whereas enforcing TCP in the client network configuration (or in the connection string, via tcp:servername) skips the NP connect attempt and goes straight to TCP for a much better experience under load.
Now it is true that the same link I quoted above goes on to praise NP for its ease of configuration, most likely referring to there being no need to open the SQL TCP port in the firewall, but that is where BOL and I have different views.
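As a quick sanity check, the net_transport column of sys.dm_exec_connections shows which protocol a connection actually ended up using; for the current session, for example:
SELECT net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID;
Typical values include 'Shared memory', 'TCP' and 'Named pipe'.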
Shared memory is fastest for local (client and server on the same machine). Named Pipes is probably second fastest for local. For remote connections everyone is using TCP/IP, and the remaining protocols are kind of turning into networking history.
Using Shared Memory Protocol
The network libraries you choose when installing SQL Server can affect the speed of communications between the server and its clients. Of the three key network libraries, TCP/IP is the fastest and Multi-Protocol is the slowest. Because of the speed advantage, you will want to use TCP/IP on both your servers and clients. Also, don't install unused network libraries on the server, as they only contribute unnecessary overhead.
Named Pipes is the fastest SQL Server protocol.
