Time difference between two machines causes a problem in Service Broker communication - sql-server

I have two machines, each with its own SQL Server instance, and I run Service Broker between them. One machine's clock differs from the other's: machine A shows 9:00 while the other shows 11:00. Because of this time difference, messages sent from one machine are never received by the other; as soon as I synchronize the two clocks, the messages are received.
So my question is: how can I configure Service Broker to tolerate the time difference?

You can't. The security protocols that Service Broker relies on dictate a limit on the allowed clock skew between machines.
If the machines really need to show 9:00 vs. 11:00, keep their underlying clocks synchronized and use time zones to produce the different local times.
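To see how large the effective skew actually is, you can ask both instances for their UTC time and compare; a minimal sketch, assuming the pyodbc driver and the hypothetical server names MACHINE_A and MACHINE_B:

```python
# Quick skew check: query UTC time from both instances and diff the results.
# pyodbc, the driver name and the server names are assumptions, not part of
# the original question.
import pyodbc

def utc_now(server):
    # GETUTCDATE() is independent of the machine's local time-zone setting.
    conn = pyodbc.connect(
        f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={server};"
        "DATABASE=master;Trusted_Connection=yes", timeout=5)
    try:
        return conn.cursor().execute("SELECT GETUTCDATE();").fetchone()[0]
    finally:
        conn.close()

skew = utc_now("MACHINE_A") - utc_now("MACHINE_B")
print(f"UTC skew between the two instances: {skew}")
```

If the UTC values differ by roughly two hours, the clocks themselves are out of sync and should be synchronized (for example via a domain time source or NTP); only the displayed local time should differ between the machines.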

Related

What might lead to a sudden increase in ASYNC_NETWORK_IO wait time?

At approximately 2021-06-14 02:00 (UTC) my website began performing very poorly (some requests taking >10s). An analysis of the logs shows that the slowdown only affects routes which establish a DB connection. When I look at the DB, I can see that I am experiencing high Network I/O (ASYNC_NETWORK_IO) wait times.
My understanding is that this can point to a couple of problems: either there is an issue with the network or, more likely, a problem with the application consuming the DB results.
https://www.sqlshack.com/reducing-sql-server-async_network_io-wait-type/
Unfortunately, when I investigate any further, both the network and the client seem to be fine.
Problem with the network?
ping < 1ms
Utilizing a small fraction of 10GbE bandwidth
Auto-Negotiation is detecting the network bandwidth properly
Web server / Application processing results too slowly?
CPU usage is fine
No code changes when slowdown began
Request queue emptied quickly
Is there any other explanation for why I might see a sudden increase in ASYNC_NETWORK_IO wait time?
Update
Most resources (like the one I linked above) say that a problem with SQL Server itself is unlikely. However, when I switched out my application's connection string to use my test db (located on a different server), the site performance went back to normal, and no ASYNC_NETWORK_IO wait times could be seen in my test db.
If the issue was with the web server or the client application, I would have expected to see the connection to the test db cause high ASYNC_NETWORK_IO wait times there as well.
Does anyone have any ideas about what could be causing this from the SQL Server side?
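Not an answer, but to make the comparison between the two servers concrete it can help to sample how quickly ASYNC_NETWORK_IO grows on each, rather than looking at totals since the last restart. A rough sketch, assuming pyodbc and placeholder server names:

```python
# Sample the growth of ASYNC_NETWORK_IO over one minute on each server.
# pyodbc and the server names are assumptions for illustration only.
import time
import pyodbc

QUERY = """
SELECT wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';
"""

def sample(server):
    conn = pyodbc.connect(
        f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={server};"
        "DATABASE=master;Trusted_Connection=yes")
    try:
        return conn.cursor().execute(QUERY).fetchone()
    finally:
        conn.close()

for server in ("PROD_SQL", "TEST_SQL"):     # placeholder names
    before = sample(server)
    time.sleep(60)
    after = sample(server)
    print(server,
          "wait ms/min:", after.wait_time_ms - before.wait_time_ms,
          "waits/min:", after.waiting_tasks_count - before.waiting_tasks_count)
```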

Oracle-DB: what are the CPU costs of session connect/disconnect

A technique generally regarded as poor practice is to create a separate database session for every atomic DB activity.
You may sometimes encounter strategies like:
processing a large number of items in a loop, where each processing step creates a DB session, executes a small set of SQL statements and terminates the session
a polling process that checks a SQL result once a second, each time in a new DB session
But what costs are generated by frequently connecting and disconnecting DB sessions?
The internal recording of database activity (AWR/ASH) has no answer, because establishing the DB connection is not a SQL activity.
The superficial practical answer depends on how you define 'connection': is a connection what the app knows as a connection, or is it the network connection to the DB, or is it the DB server process and the memory used to do any processing? The theoretical overall answer is that establishing some application context and starting a DB server process with its memory allocation - and then doing the reverse when the app has finished running SQL statements - is 'expensive'. This was measured in Peter Ramm's answer.
In practice, long-running applications that expect to handle a number of users create a connection pool (e.g. in Node.js or in Python); a short sketch of this pattern follows below. These remain open for the life of the application. From the application's point of view, getting a connection from the pool to do some SQL is a very quick operation. The initial cost (a few seconds of startup at most) of creating the connection pool can be amortized over the process life of the application.
The number of server processes (and therefore overhead costs) on the database tier can be reduced by additional use of a 'Database Resident Connection Pool'.
These connection pools have other benefits for Oracle in terms of supporting Oracle's High Availability features, often transparently. But that's off topic.
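As a rough illustration of the pooled pattern described above, here is a minimal sketch using the python-oracledb driver; the user, password and DSN are placeholders:

```python
# Connection-pool sketch with python-oracledb; credentials and DSN are
# placeholders. The pool is created once at application startup and
# individual requests only borrow connections from it.
import oracledb

pool = oracledb.create_pool(
    user="app_user", password="secret", dsn="dbhost/orclpdb1",
    min=2, max=8, increment=1)

def current_db_time():
    # Borrowing from the pool is cheap compared with a full connect;
    # the connection is returned to the pool when the block exits.
    with pool.acquire() as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT SYSTIMESTAMP FROM DUAL")
            return cur.fetchone()[0]

print(current_db_time())
```

The same client-side pool can also be pointed at a DRCP-enabled service to get the server-side reduction mentioned above.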
A simple comparison of system load gives a rough indication of the price of connection creation.
Example:
an idle database instance on a single host with 4 older CPU cores (Intel Xeon E312xx, 2.6 GHz)
an external SQL*Plus client (not on the DB host) which executes a single "SELECT SYSTIMESTAMP FROM DUAL" per DB session
the delay between the SQL*Plus calls is timed so that one connection per second is created and destroyed
6 threads are active, each creating one session per second
Result:
with an idle database, CPU load averaged over the 4 CPU cores is 0.22%
with 6 threads creating and destroying sessions each second, CPU load is 6.09%
I/O wait also occurs, averaging 1.07%
so on average 5.87% of the 4 CPU cores is consumed by these 6 threads
equivalent to 23.48% of one CPU core for the 6 threads, or 3.91% per thread
That means:
Connecting and disconnecting an Oracle DB session once per second costs approximately 4% of one CPU core on the DB server.
Keeping this value in mind should help when deciding whether it is worth changing process behavior regarding session creation.
p.s.: This does not consider the additional cost of session creation on the client side.
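For reference, a comparable connect-per-second churn load can be generated with a short script. This is only a sketch, assuming the python-oracledb driver and placeholder credentials; it is not the harness used for the numbers above:

```python
# Load generator sketch: 6 threads, each opening a fresh session once per
# second, running one query, and disconnecting again. Driver choice,
# credentials and DSN are assumptions for illustration.
import threading
import time
import oracledb

THREADS = 6
RUN_SECONDS = 300
PARAMS = dict(user="app_user", password="secret", dsn="dbhost/orclpdb1")

def churn():
    deadline = time.monotonic() + RUN_SECONDS
    while time.monotonic() < deadline:
        start = time.monotonic()
        conn = oracledb.connect(**PARAMS)      # full session creation
        try:
            conn.cursor().execute("SELECT SYSTIMESTAMP FROM DUAL").fetchone()
        finally:
            conn.close()                       # full session teardown
        # pace to roughly one connect/disconnect cycle per second
        time.sleep(max(0.0, 1.0 - (time.monotonic() - start)))

workers = [threading.Thread(target=churn) for _ in range(THREADS)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```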

Fixing an old ntp server with no leap second

I have to take care of an old stratum 1 (GPS source) NTP server. It runs FreeBSD 8.0 with ntp-4.2.6p3 and has an uptime of 1800+ days. It is used to provide time to many other servers in the datacenter.
Comparing this server's time with other public NTP servers, it is 1 second ahead (the measured offset is -1 second). The ntp.conf has no leapfile reference, so I think this server has not taken the leap seconds into account.
The question is: can it be fixed? Is there a way to add the leap second now? I have tried adding an updated leap-second file to the configuration and restarting the service, but no luck; the offset is always the same (-1 second).
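As a side note, the offset against public servers can be measured from any client host with a few lines of code; a minimal sketch, assuming the third-party ntplib package and a placeholder host name for the old server:

```python
# Compare the suspect stratum-1 server against a public pool server, as seen
# from a third host. ntplib (a third-party package) and the first host name
# are assumptions for illustration.
import ntplib

client = ntplib.NTPClient()
for host in ("old-stratum1.example.com", "pool.ntp.org"):
    resp = client.request(host, version=3)
    # offset = estimated difference in seconds between the local clock and
    # the queried server; the gap between the two offsets shows the ~1 s error.
    print(f"{host}: offset={resp.offset:+.3f}s leap_indicator={resp.leap}")
```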

Read/write to a database at a particular rate

I'm writing a multithreaded program to check the performance of a database server. In my program, one thread should access (read/write) the database at a given rate (e.g. 100 times per second). My problem is: how can I access the database at that given rate? How can I send 1000 requests per second to the database?
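One common way to do this is to pace each worker thread against absolute deadlines instead of sleeping a fixed amount after every call; a minimal sketch of that pacing logic in Python, where do_db_call is a placeholder for the actual read/write:

```python
# Fixed-rate pacing sketch: issue `rate_per_second` calls per second by
# scheduling against absolute deadlines, so the rate is not reduced by the
# duration of each call as long as there is spare time. do_db_call() is a
# placeholder for the real database read/write.
import time

def run_at_rate(do_db_call, rate_per_second, duration_seconds):
    interval = 1.0 / rate_per_second
    next_due = time.monotonic()
    end = next_due + duration_seconds
    while time.monotonic() < end:
        do_db_call()
        next_due += interval
        delay = next_due - time.monotonic()
        if delay > 0:
            time.sleep(delay)   # on schedule: wait for the next slot
        # if delay <= 0 the call overran its slot; continue immediately

# Example: 100 calls per second for 10 seconds against a no-op stub.
run_at_rate(lambda: None, rate_per_second=100, duration_seconds=10)
```

Note that 1000 requests per second from a single thread only works if each blocking call finishes in well under 1 ms; beyond that you need several paced threads or an asynchronous client.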

connection timeout

My method executes lots of asynchronous SQL requests and I constantly get connection timeout exceptions. What else can I do besides increasing the connection timeout value and proper indexing? I mean on the database side, not the code side; I can't change the code. Besides, the application runs fine on other servers; only I experience those timeout exceptions, on my PC with a local MS SQL Server 2008 R2 database (which is on the same PC). So I think this is clearly a performance issue, since the connection timeout is already set to 3 minutes. Maybe there is something I can change on the server? Maybe there is a constraint on the number of simultaneous requests? Each of my requests needs clearly less than 3 minutes, but there are about 26,000 of them running asynchronously, and only I experience these problems on my local PC and local DB.
I've run the process monitor and I see that at the time when my code starts the SQL Server eventually consumes 200 MB of RAM and takes up about a half of CPU processing time. But I still have 1 GB of RAM free, so this is not a memory problem.
I think the number of connections can be the cause. Make sure you close each connection properly, or try to reduce the number of them. You can also use named pipes, which can work around some limitations of the usual connections.
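As an illustration of the "reduce the number of them" suggestion, this is roughly what capping concurrency looks like in Python; run_query stands in for one of the existing asynchronous requests:

```python
# Cap how many of the ~26,000 requests run at the same time so the local
# instance is not flooded with simultaneous connections. run_query and the
# worker count are placeholders.
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT = 16   # tune to what the local SQL Server instance can handle

def run_all(requests, run_query):
    # The executor's worker limit acts as the concurrency cap: queued
    # requests wait instead of all opening connections at once.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
        return list(pool.map(run_query, requests))
```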
