I am using this command https://docs.dbatools.io/#Test-DbaNetworkLatency to test network latency against SQL Server 2016, and it reports a 100 ms network latency result (from the NetworkOnlyTotal output). However, if I ping the SQL Server instance I get only 11 ms. I wonder what causes the extra 90 ms of latency in SQL Server. Is it expected? Or is there a configuration I should look at?
I tried the -Count parameter and found that NetworkOnlyTotal doesn't change much, or sometimes even drops. Does this value represent an average?
See the two examples below: one runs the query once, the other runs it 10 times. The NetworkOnlyTotal is actually better for the 10-run case. From its name it looks like the total time across the 10 requests, so why does the value drop?
Test-DbaNetworkLatency -SqlCredential $credential -SqlInstance $instance -Count 1
output:
ExecutionCount : 1
Total : 141.55 ms
Average : 141.55 ms
ExecuteOnlyTotal : 69.13 ms
ExecuteOnlyAverage : 69.13 ms
NetworkOnlyTotal : 72.42 ms
Test-DbaNetworkLatency -SqlCredential $credential -SqlInstance $instance -Count 10
output:
ExecutionCount : 10
Total : 180.33 ms
Average : 18.03 ms
ExecuteOnlyTotal : 127.38 ms
ExecuteOnlyAverage : 12.74 ms
NetworkOnlyTotal : 52.95 ms
I wonder what causes the extra 90ms latency in SQL Server. Is it expected?
Probably the one-time connection stuff.
1) Establishing a TCP/IP session
2) Negotiating connection protocol encryption
3) Logging in and creating a session
Try a higher -Count. Establishing a connection and a session takes some time and shouldn't really be counted as "network latency", since clients will hold open and reuse connections.
The product description states that it "will then output how long the entire connection and command took, as well as how long only the execution of the command took." It also says that the command is executed three times, and the tool needs a little time to authenticate the connection with SQL Server. So it seems reasonable to me.
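A rough illustration of why the per-request network cost shrinks with a higher -Count, using only the figures posted in the question (the subtraction is my reading of the output fields, not something the dbatools docs spell out):

```python
# NetworkOnlyTotal appears to be Total minus ExecuteOnlyTotal, so the
# one-time connect/login overhead lands in the "network" bucket.

def network_only(total_ms, execute_only_ms):
    return round(total_ms - execute_only_ms, 2)

# Count = 1: a single request carries the whole connection setup cost
print(network_only(141.55, 69.13))   # 72.42, matching the posted output

# Count = 10: the same setup cost is spread over ten requests
total_10 = network_only(180.33, 127.38)   # 52.95, matching the posted output
per_request = total_10 / 10
print(per_request)   # roughly 5.3 ms per request vs 72.42 ms for a single run
```

This is why the total can even drop between runs: the fixed setup cost dominates, and run-to-run variance in it outweighs the small per-request network time.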
Related
That's a title and a half, but it pretty much summarises my "problem".
I have an Azure Databricks workspace and an Azure Virtual Machine running SQL Server 2019 Developer. They're on the same VNET, and they communicate nicely with each other. I can select rows very happily from the SQL Server, and some inserts work really nicely too.
My scenario:
I have a spark table foo, containing any number of rows. Could be 1, could be 20m.
foo contains 19 fields.
The contents of foo need to be inserted into a table on the SQL Server, also called foo, in a database called bar, meaning my destination is bar.dbo.foo.
I've got the com.microsoft.sqlserver.jdbc.spark connector configured on the cluster, and I connect using an IP, port, username and password.
My notebook cell of relevance:
df = spark.table("foo")

try:
    url = "jdbc:sqlserver://ip:port"
    table_name = "bar.dbo.foo"
    username = "user"
    password = "password"

    df.write \
        .format("com.microsoft.sqlserver.jdbc.spark") \
        .mode("append") \
        .option("truncate", True) \
        .option("url", url) \
        .option("dbtable", table_name) \
        .option("user", username) \
        .option("password", password) \
        .option("queryTimeout", 120) \
        .option("tableLock", True) \
        .option("numPartitions", 1) \
        .save()
except ValueError as error:
    print("Connector write failed", error)
If I prepare foo to contain 10,000 rows, I can run this script time and time again, and it succeeds every time.
As the row counts drop, the Executor occasionally tries to process 4,096 rows in a task. As soon as it tries to do 4,096 in a task, weird things happen.
For example, having created foo to contain 5,000 rows and executing the code, this is the task information:
Index Task Id Attempt Status Executor ID Host Duration Input Size/Records Errors
0 660 0 FAILED 0 10.139.64.6 40s 261.3 KiB / 4096 com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
0 661 1 FAILED 3 10.139.64.8 40s 261.3 KiB / 4096 com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
0 662 2 FAILED 3 10.139.64.8 40s 261.3 KiB / 4096 com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
0 663 3 SUCCESS 1 10.139.64.5 0.4s 261.3 KiB / 5000
I don't fully understand why it fails after 40 seconds. Our timeouts are set to 600 seconds on the SQL box, and the query timeout in the script is 120 seconds.
Every time the Executor does more than 4,096 rows, it succeeds, regardless of the size of the dataset. Sometimes it tries to do 4,096 rows out of a 100k-row set, fails, then puts the full 100k records into the task and immediately succeeds.
When the set is smaller than 4,096, the execution will typically generate one message:
com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed
and then immediately works, having moved on to the next executor.
On the SQL Server itself, I see ASYNC_NETWORK_IO as the wait, using Adam Machanic's sp_whoisactive. The wait persists for the full duration of the 40 s attempt. It looks like at 40 s the attempt is abandoned immediately and a new connection is created, consistent with the messages I see in the task information.
Additionally, when looking at the statements, I note that it uses ROWS_PER_BATCH = 1000 regardless of the original number of rows. I can't see any way of changing that in the docs; I tried rowsPerBatch as an option on the df, but it didn't appear to make a difference - still showing the 1000 value.
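For what it's worth, the generic Spark JDBC writer exposes a batchsize option (default 1000) that controls rows per round trip; whether the Microsoft connector passes it through to ROWS_PER_BATCH is an assumption on my part, not something I have verified. A sketch of trying it, with the same placeholder url/table as above:

```python
# Hypothetical tweak: "batchsize" is the generic Spark JDBC writer's
# rows-per-round-trip option (default 1000). It is NOT documented for
# com.microsoft.sqlserver.jdbc.spark, so treat this as an experiment.
write_options = {
    "url": "jdbc:sqlserver://ip:port",   # placeholder from the question
    "dbtable": "bar.dbo.foo",
    "batchsize": "10000",                # try raising from the 1000 default
    "tableLock": "true",
    "numPartitions": "1",
}

# df.write.format("com.microsoft.sqlserver.jdbc.spark") \
#     .mode("append").options(**write_options).save()
print(write_options["batchsize"])
```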
I've been running this with lots of different amounts of rows in foo - and when the total rows is greater than 4,096 my testing suggests that the spark executor succeeds if it tries a number of records that exceeds 4,096. If I remove the numPartitions, there are more attempts of 4,096 records, and so I see more failures.
Weirdly, if I cancel a query that appears to have been running for longer than 10 s and immediately retry it, it seems to succeed every time, as long as the number of rows in foo is != 4,096. My sample is obviously pretty small - tens of attempts.
Is there a limitation I'm not familiar with here? What's the magic of 4,096?
In discussing this with a friend, we wondered whether some form of implicit type conversion happens in the arrays when they hold fewer than 4,096 records, which somehow causes delays.
I'm at quite a loss on this one, and wondering whether I just need to check the length of the DF before attempting the transfer - using an iterative cursor in pyodbc for fewer rows, and sticking to the JDBC connector for larger numbers of rows. It seems like that shouldn't be needed!
Many thanks,
Johan
I have a capacity test running on a S4 Azure database, and I am getting this error:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I have 500 "users" hitting my site. My connection string is:
Server=tcp:database.net,1433;Initial Catalog=database-prd;Persist Security Info=False;User ID=username;Password=password;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
I have checked my code and what I do is:
Using "using"
using (SqlConnection connection = new SqlConnection(_connectionString))
{
    connection.Open();
    // ... logic
}
Scoped repository
serviceCollection.AddScoped<IRepository, SqlServerRepository>();
I am now wondering about the default Max Pool Size; I haven't set it in the connection string. Should I? I have an S4, and its limits are:
Max concurrent sessions: 4800
Max concurrent Workers (requests): 200
according to this: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-dtu-resource-limits-single-databases#standard-service-tier-continued
What should I set the pool size to? Does it even matter? As I understand it, Max Pool Size is client-side and defaults to 100. I could try raising it a bit, to maybe 500 or 800.
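If you do raise it, Max Pool Size is set directly in the connection string; for example, reusing the string above with an illustrative value:

```
Server=tcp:database.net,1433;Initial Catalog=database-prd;Persist Security Info=False;User ID=username;Password=password;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=500;
```

Note that the pool is maintained per distinct connection string per process, so with several app instances the server can still see more concurrent connections than a single pool's maximum.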
Where it maxes out is on some pretty simple selects:
select p1,p2,p3 from baskets where Id=1234
and the same for the lines. Not too complex. The only complex query I have has 4 or 5 joins, but it isn't hit that much.
Does anyone here have any pointers on Max Pool Size? Does it even matter?
Overall database performance is slow in one of our production environments.
I have attached the Statspack reports for two time periods generated on 15/02/16, between 09:00 AM - 02:00 PM and 03:00 PM - 07:00 PM GMT.
DB details:
Oracle 11g 11.2.0.3.0 - Standard Edition
OS memory: 11.2GB
The current database SGA and PGA sizes are:
sga_max_size : 5G
sga_target : 5G
pga_aggregate_target : 1G
db_cache_size : 2080M
memory_max_target : 0
memory_target : 0
Please advise.
Ram
Run an AWR report using dbms_workload_repository (for HTML output use the AWR_DIFF_REPORT_HTML function) or Oracle Enterprise Manager, and check which things are taking the most DB time / CPU time / I/O ops, etc.
There can be dozens of different causes for your not-that-specific issue.
Regarding the SGA/PGA specifically, you can also just query gv$sga_target_advice and gv$pga_target_advice and check whether there's a lack of memory in some of the pools (the more advanced and precise option is gv$sga_resize_ops).
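As a starting point against the PGA advisory, something like the query below (column names are from the standard v$pga_target_advice view; use the gv$ variant and filter by inst_id on RAC):

```sql
-- Estimated cache hit percentage at candidate PGA sizes.
-- ESTD_OVERALLOC_COUNT > 0 suggests pga_aggregate_target is too small.
SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
FROM   v$pga_target_advice
ORDER  BY pga_target_factor;
```

The row with pga_target_factor = 1 is your current setting; if the estimated hit percentage keeps climbing well past it, more PGA would likely help.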
In order to send automated job failure notifications from SQL Server Agent, you must configure Database Mail and then go to SQL Agent properties > Alert System > Mail Session > Enable mail profile to configure the mail system and mail profile used to send the email notifications. We have many servers and would like to set up a central cross-server job that ensures the Enable mail profile option is checked across the various servers, because otherwise the scheduled jobs fail silently without sending an email notification.
Is there a supported way to query the msdb database to get to these settings using T-SQL (or by some other way programmatically)?
Running a SQL Profiler trace while bringing up the properties page in the SQL Server Agent UI shows references to msdb.dbo.sp_get_sqlagent_properties, which is an undocumented procedure (I would prefer to use documented objects to help future-proof our solution), and several calls to master.dbo.xp_instance_regread; I would imagine the registry keys could change with each SQL Server instance installation.
Does anyone know of a way to query to check whether the enable mail profile option is configured, and also retrieve the mail profile that is designated in the SQL Agent Alert System configs? Most of our servers are SQL Server 2008 R2, with some SQL 2012. I would prefer SQL 2008+ support.
Thanks in advance!
Future-proofing is a very wise idea :). As you noted, both xp_regread and xp_instance_regread are also undocumented. And http://social.msdn.microsoft.com/Forums/sqlserver/en-US/b83dd2c1-afde-4342-835f-c1debd73d9ba/xpregread explains your concern (plus, it offers you an alternative).
Your trace and your run of sp_helptext 'sp_get_sqlagent_properties' are a good start. The next thing to do is run sp_helptext 'sp_helptext', and note its reference to sys.syscomments. BOL sys.syscomments topic redirects you to sys.sql_modules, and that points to the next step. Unfortunately for your needs, just one row (for 'sp_get_sqlagent_properties') will be returned by running USE msdb; SELECT object_name(object_id) FROM sys.sql_modules WHERE definition LIKE '%sp_get_sqlagent_properties%'. I thus assume you are out of luck - there appears to be no alternative, publicly documented, module (sproc). My assumption could be wrong :).
I deduce that xp_reg% calls exist for client (SMO, SSMS, etc.) needs, such as setting/getting agent properties. More importantly (for your needs), your sp_helptext run also reveals SSMS (a client) is using a registry store (i.e. not a SQL store). Unfortunately, I must deduce (based upon an absence of proof from a library search) that those keys (and their values) are also not documented...
The above appears to put you in a pickle. You could decide "if we are going to rely upon undocumented registry keys, we might as well rely on the undocumented calls to read them", but I won't recommend that:). You could also file a feature request at https://connect.microsoft.com/ (your need is clear), but because your need concerns a client-side feature request, I do not recommend holding your breath while waiting for a fix :).
Perhaps it is time to step back and take a look at the bigger picture:
How often can that key be changed, and how often will this process poll for that change?
Email uses a mail primitive. Sender: "Dear recipient, did you get my mail?" Recipient: "Dear sender, did you send me mail?" Disabling an email profile is not the only reason for an email failure.
Would a different approach be more useful, when compared to periodically checking a key?
One approach would be to periodically send "KeepAlive" email. If the "KeepAlive" email isn't periodically received, maybe that key was tweaked, maybe the Post Office is on a holiday, or maybe something else (equally bad) happened. Plus, this approach should be fully supported, documented, and be future-proof. Who knows how and what keys will be used in the next version of SQL Server Agent?
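A minimal sketch of such a keep-alive, assuming a Database Mail profile named AgentKeepAlive and a monitored mailbox (both hypothetical names); schedule it as an Agent job step and alert when the message stops arriving:

```sql
-- Hypothetical keep-alive step: if this message stops arriving, something
-- in the mail path is broken (profile disabled, Database Mail stopped,
-- SMTP trouble, ...), not just the one registry key.
DECLARE @subject nvarchar(255) = N'KeepAlive from ' + @@SERVERNAME;

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'AgentKeepAlive',           -- hypothetical profile name
    @recipients   = N'dba-monitor@example.com',  -- hypothetical mailbox
    @subject      = @subject,
    @body         = N'SQL Agent mail path is alive.';
```

The receiving side can be as simple as a mailbox rule plus a human, or a script that raises an alarm when no KeepAlive has arrived within the expected interval.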
The first bullet isn't addressed (neither would it be addressed by periodically checking a key), and perhaps you have additional needs (worth mentioning on MS connect).
I finally found a way to do this using PowerShell and Microsoft.SqlServer.Management.Smo.Agent.JobServer.
Here is the PowerShell script I wrote to check whether SQL Agent mail alerts are enabled and to make sure that SQL Agent is set to auto-start when the server reboots. This works with local or remote SQL instances.
# usage examples
Check-SQLAgentConfiguration -InstanceName 'localhost\sql2014'
Check-SQLAgentConfiguration -InstanceName 'RemoteServerName'
function Check-SQLAgentConfiguration {
    param([string]$InstanceName = 'localhost')

    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null;

    $smosrv = New-Object Microsoft.SqlServer.Management.Smo.Server($InstanceName);
    $smosrv.ConnectionContext.ConnectTimeout = 5; # 5 second timeout
    $smosrv.ConnectionContext.ApplicationName = 'PowerShell Check-SQLAgentConfiguration';

    "Server: {0}, Instance: {1}, Version: {2}, Product Level: {3}" -f $smosrv.Name, $smosrv.InstanceName, $smosrv.Version, $smosrv.ProductLevel;

    # NOTE: this does not seem to ever fail, even if SQL Server is offline.
    if (!$smosrv) { "SQL Server Connection failed"; return $null; }

    $agent = $smosrv.JobServer;
    if (!$agent) {
        throw "Agent Connection failed";
    }

    $agentConfigErrMsg = "";
    if ($agent.AgentMailType -ne "DatabaseMail") { $agentConfigErrMsg += " AgentMailType: " + $agent.AgentMailType + "; "; }
    if (!$agent.DatabaseMailProfile) { $agentConfigErrMsg += " DatabaseMailProfile: " + $agent.DatabaseMailProfile + "; "; }
    if ($agent.SqlAgentAutoStart -ne "True") { $agentConfigErrMsg += " SqlAgentAutoStart: " + $agent.SqlAgentAutoStart + " ServiceStartMode: " + $agent.ServiceStartMode + "; "; }

    if ($agentConfigErrMsg.Length -gt 0) {
        throw ("Invalid SQL Agent config! " + $agentConfigErrMsg);
    }

    <#
    # for debugging:
    "Valid: "
    "AgentMailType: " + $agent.AgentMailType;
    "DatabaseMailProfile: " + $agent.DatabaseMailProfile;
    "ServiceStartMode: " + $agent.ServiceStartMode;
    "SqlAgentAutoStart: " + $agent.SqlAgentAutoStart;
    # "SqlAgentMailProfile: " + $agent.SqlAgentMailProfile;
    #>
    return 0;
}
SQL 2008 R2 uses Service Broker-style queues for mail processing (http://technet.microsoft.com/en-us/library/ms175887%28v=sql.105%29.aspx). In our environments I check that the corresponding queue exists and is active.
SELECT * FROM msdb.sys.service_queues
WHERE name = N'ExternalMailQueue'
AND is_receive_enabled = 1;
This table is listed online (http://technet.microsoft.com/en-us/library/ms187795%28v=sql.105%29.aspx).
Testing shows that this tracks the required transitions as we went from new instance -> mail enabled -> mail switched off and back again.
How can you find out what the long-running queries are on an Informix database server? I have a query that is using up CPU and I want to find out what it is.
If the query is currently running, watch the onstat -g act -r 1 output and look for items with an rstcb that is not 0.
Running threads:
tid tcb rstcb prty status vp-class name
106 c0000000d4860950 0 2 running 107soc soctcppoll
107 c0000000d4881950 0 2 running 108soc soctcppoll
564457 c0000000d7f28250 c0000000d7afcf20 2 running 1cpu CDRD_10
In this example the third row is what is currently running. If you have multiple rows with non-zero rstcb values, watch for a bit, looking for the one that is always or almost always there. That is most likely the session you're looking for.
c0000000d7afcf20 is the address that we're interested in for this example.
Use onstat -u | grep c0000000d7afcf20 to find the session
c0000000d7afcf20 Y--P--- 22887 informix - c0000000d5b0abd0 0 5 14060 3811
This gives you the session id, which in our example is 22887. Use onstat -g ses 22887 to list info about that session. In my example it's a system session, so there's nothing to see in the onstat -g ses output.
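If SQL statement tracing is enabled on the instance (SQLTRACE in the onconfig file), the sysmaster database can also surface the slowest statements directly; a sketch only, since the syssqltrace pseudo-table and its column names vary by Informix version:

```sql
-- Sketch, assuming SQL tracing is on and syssqltrace is available:
-- list the ten longest-running traced statements with their session ids.
SELECT FIRST 10
       sql_sid,
       sql_runtime,
       sql_statement
FROM   sysmaster:syssqltrace
ORDER  BY sql_runtime DESC;
```

This avoids the onstat address-chasing above when tracing happens to be enabled.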
That's because the suggested answer is for DB2, not Informix.
The sysmaster database (a virtual relational database of Informix shared memory) will probably contain the information you seek. These pages might help you get started:
http://docs.rinet.ru/InforSmes/ch22/ch22.htm
http://www.informix.com.ua/articles/sysmast/sysmast.htm
Okay, it took me a while to work out how to connect to sysmaster. The JDBC connection string is:
jdbc:informix-sqli://dbserver.local:1526/sysmaster:INFORMIXSERVER=mydatabase
where the port number is the same as when connecting to the actual database. That is, if your connection string is:
jdbc:informix-sqli://database:1541/crm:INFORMIXSERVER=crmlive
Then the sysmaster connection string is:
jdbc:informix-sqli://database:1541/sysmaster:INFORMIXSERVER=crmlive
I also found this wiki page, which contains a number of SQL queries for operating on the sysmaster tables.
SELECT ELAPSED_TIME_MIN,SUBSTR(AUTHID,1,10) AS AUTH_ID,
AGENT_ID, APPL_STATUS,SUBSTR(STMT_TEXT,1,20) AS SQL_TEXT
FROM SYSIBMADM.LONG_RUNNING_SQL
WHERE ELAPSED_TIME_MIN > 0
ORDER BY ELAPSED_TIME_MIN DESC
Credit: SQL to View Long Running Queries