Does SQL Server share sessions? - sql-server

In SQL Server 2014 I open 3 sessions on the same database. In the first session I run Update Statistics A. I time this to take around 1 minute.
In my 2nd and 3rd sessions I run an Update Statistics B (one at a time). Each takes about 1 minute as well.
I then run Update Statistics A on session 1, and Update Statistics B on session 2, both at the same time. Each query finishes in around 1 minute, as expected.
I then run Update Statistics A on window 1, and Update Statistics B on window 3, both at the same time. Each query takes close to 2 minutes now.
I checked sp_who2 and can see 3 distinct sessions here. What could be a possible cause for this?
Also, when I check the query status, I noticed that in the scenario where I run the queries in windows 1 and 3, one status is always running while the other is either runnable or suspended. In the other scenario, where I run in windows 1 and 2, both are always running.

Sessions are always different. You can see the SPID at the bottom of the SQL Server Management Studio window, or you can run:
SELECT @@SPID
to see the session number. SQL Server is a multi-user system that manages its own sessions and threads. The number of active sessions depends on what the sessions are doing, how long each has been doing it, and how much resource is available on the machine SQL Server runs on - typically more memory/CPUs allow more simultaneously active sessions. Of course, a few heavy sessions could use all the machine's resources.
Depending on the locking in use, sessions are somewhat, or very, isolated from each other. SQL Server decides when to temporarily suspend a session so that other sessions can have a turn using the CPU and memory. If a whole bunch of sessions are all updating the same table, there may be contention that slows SQL Server down. Asking how long this takes is like asking how long it takes to drive 100 miles. It depends, of course.
If you have three sessions doing the same thing, SQL Server will not fold the three actions into one. It will do each one, simultaneously if possible, or serially.
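To observe this yourself, a diagnostic sketch (not part of the original answer): `sys.dm_exec_requests` shows each active request's status (running/runnable/suspended), its wait type, and the scheduler it is assigned to. If two sessions share one scheduler, one runs while the other sits runnable, which matches the behavior described in the question.

```sql
-- Diagnostic sketch: run in a separate session while the two
-- UPDATE STATISTICS commands execute.
SELECT r.session_id,
       r.status,            -- running / runnable / suspended
       r.wait_type,         -- what a suspended session is waiting on
       r.scheduler_id,      -- sessions sharing a scheduler take turns on one CPU
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;
```

If both statistics updates show the same `scheduler_id`, that would explain the roughly doubled runtime; the DMV output will show whether this explanation applies.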

Related

max. queries running parallel in SQL Server

We have a C# app using Hangfire to run report on SQL Server.
Hangfire simply runs tasks in parallel. Currently it is set up to run 20 reports in parallel, and that's what I see on the dashboard: 20 reports running, and some waiting.
But if I open SQL Server, I see that only 2 report-related queries are actually running, and I usually see 1 or 2 report queries in "suspended" or "runnable" status.
What is the reason for this? Is it because somehow SQL Server thinks that more parallelism wouldn't help? Or is it because of configuration? I couldn't really find anything relevant; articles usually talk about "max degree of parallelism", but that is the level of parallelism within a single query. It is set to 0.
thanks
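One way to investigate (a diagnostic sketch, not part of the original question): list the report sessions and what each one is waiting on or blocked by, which distinguishes "SQL Server chose not to run them" from "they are queued behind locks or resources".

```sql
-- Diagnostic sketch: see what the "missing" report queries are waiting on.
SELECT session_id,
       status,                 -- running / runnable / suspended
       command,
       wait_type,              -- e.g. lock waits, memory grants, I/O
       blocking_session_id     -- non-zero means blocked by another session
FROM sys.dm_exec_requests
WHERE session_id > 50;         -- common heuristic to skip system sessions
```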

Rogue Process Filling up Connections in Oracle Database

Last week we updated our DB password, and ever since, after every DB bounce the connections get filled up.
We have 20+ schemas, and connections to only one schema get filled up. Nothing shows up in the sessions. There may be old apps accessing our database with the old password and filling up connections.
How can we identify how many processes are trying to connect to the DB server and how many have failed?
Every time we bounce our DB servers, connections go through; after about an hour, no one else can make new connections.
BTW: in our company, we have LOGON and LOGOFF triggers which persist the session connect and disconnect information.
It is quite possible that what you are seeing are recursive sessions created by Oracle when it needs to parse SQL statements [usually not a performance problem, but the processes parameter may need to be increased]: ...
Example 1: high values for dynamic_sampling cause more recursive SQL to be generated;
Example 2: I have seen a situation where an application did excessive hard parsing; this drives up the process count because hard parsing requires new processes to execute parse-related recursive SQL (we increased the processes parameter in that case since it was a vendor app). Since your issue is tied to the bounce, it could be that app startup requires a lot of parsing.
Example 3:
“Session Leaking” Root Cause Analysis:
Problem Summary: We observed periods where many sessions were being created, without a clear understanding of which part of the application was creating them and why.
RCA Approach: Since the DB doesn't keep a history of past sessions, I monitored the situation by manually snapshotting v$session.
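A sketch of the kind of snapshot approach described (the `session_snap` table name is invented for this illustration; the columns come from the standard `v$session` view):

```sql
-- One-time setup: hypothetical snapshot table, empty copy of the columns we want.
CREATE TABLE session_snap AS
SELECT SYSDATE AS snap_time, sid, serial#, username, status, process, program, event
FROM   v$session
WHERE  1 = 0;

-- Run periodically (e.g. from a scheduler job) to catch short-lived sessions.
INSERT INTO session_snap
SELECT SYSDATE, sid, serial#, username, status, process, program, event
FROM   v$session;

-- Later: look for many sessions sharing one client process,
-- the pattern that points to recursive sessions.
SELECT process, COUNT(*) AS session_count
FROM   session_snap
GROUP  BY process
HAVING COUNT(*) > 1
ORDER  BY session_count DESC;
```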
Analysis:
 I noticed a pattern where multiple sessions have the same process#.
 Per the Oracle docs, these are recursive sessions created by Oracle under an originating process that needs to run recursive SQL to satisfy the query (at parse time). They go away when the process that created them is done and exits.
 If the process is long running, they stay around inactive until it is done.
 These recursive sessions don't count against your session limit, and the inactive sessions sit in an idle wait event, not consuming resources.
 The recursive sessions are almost certainly a result of recursive SQL needed by the optimizer where optimizer stats are missing (as is the case with GTTs) and the initialization parameter optimizer_dynamic_sampling is set to 4.
 The 50,000 sessions in an hour that we saw the other day are likely the result of a couple thousand SELECT statements running (I've personally counted 20 recursive sessions per query, but this number can vary).
 The ADDM report showed that the impact is not much:
Finding 4: Session Connect and Disconnect
Impact is .3 [average] active sessions, 6.27% of total activity [currently on the instance].
Average Active Sessions is a measure of database load (values approaching CPU count would be considered high). Your instance can handle up to 32 active sessions, so the impact is about 1/100th of the capacity.

SQL Server Jobs Running Every 2 minutes...Bad Practice?

We have two servers; one is very slow and geographically far away. Setting up distributed queries is a headache because they do not always work (sometimes we receive a "The semaphore timeout period has expired" error), and when a query does work it can be slow.
One solution was to set up a job that populates temporary tables on the slow server with the data we need, then runs INSERT, UPDATE and DELETE statements against our server's tables from those temporary tables, so we have updated data on our faster server. The job takes about one minute and 30 seconds and is set up to run every 2 minutes. Is this bad practice, and will it hurt our slower SQL Server box?
EDIT
The transactions happen on the slow server's agent (where the job runs), using distributed queries to connect to and update our fast server. If the job runs on the fast server, we get that timeout error every now and then.
As for the specifics: if the record exists on the faster server we update it, if it does not exist we insert it, and if the record no longer exists on the slow server we delete it... I can post code when I get to a computer.
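The insert/update/delete logic described above maps naturally onto a single MERGE statement, which also keeps the three operations in one atomic step. A hedged sketch (the table names `dbo.Orders` and `dbo.Orders_Staging` are invented for illustration):

```sql
-- Sketch: sync the fast server's table from staging data pulled off the slow server.
MERGE dbo.Orders AS target
USING dbo.Orders_Staging AS source
    ON target.OrderID = source.OrderID
WHEN MATCHED THEN                       -- record exists on both: update it
    UPDATE SET target.Amount = source.Amount,
               target.Status = source.Status
WHEN NOT MATCHED BY TARGET THEN         -- record missing on fast server: insert it
    INSERT (OrderID, Amount, Status)
    VALUES (source.OrderID, source.Amount, source.Status)
WHEN NOT MATCHED BY SOURCE THEN         -- record gone from slow server: delete it
    DELETE;
```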

How to Update and sync a Database tables at exactly same time?

I need to sync DB tables between a mobile device and a remote DB (upload from the device to the remote DB first, then download back to the device); the device may insert/update/delete rows in multiple tables.
The remote DB performs other operations based on the uploaded sync data. When the sync continues on to download data to the mobile device, the remote DB is still performing those previous tasks, which leads to sync failure - something like a "critical section" problem where both the sync and the DB operations want access to the remote database. How can I solve this? Is it possible to sync the DB and operate on the same DB at the same time?
I am using a SQL Server 2008 DB and MobiLink sync.
Edit:
Operations i do in sequence:
1. An iPhone is loaded with an application which uses MobiLink to sync data.
2. SYNC means an UPLOAD (from device to remote DB) followed by a DOWNLOAD (from remote DB to device).
3. The remote DB is the consolidated DB; the device DB is an UltraLite DB.
4. The remote DB has some triggers that fire when certain tables are updated.
5. An UPLOAD from the device to the remote DB fires those triggers when the sync upload finishes.
6. The very next moment the UPLOAD finishes, the DOWNLOAD to the device starts.
7. At exactly that moment those DB triggers fire.
8. Now a deadlock occurs between the DB sync (DOWNLOAD) and the trigger operations (which include UPDATE queries).
9. The sync fails with an error saying it cannot access some tables.
I did a lot of workarounds and Googling, and came up with a simple(?!) solution for the problem
(though the exact problem cannot be solved at this point... I tried my best).
Keep track of all clients who do a sync (a kind of user-details record).
Create a SQL job schedule which contains all the operations to be performed when a user syncs.
Announce a "maintenance period" every day to execute the SQL job's tasks against the saved user/client sync details.
Keeping track of client details every time is costly, but much needed!
The remote consolidated DB is "completely updated" only after the maintenance period.
Any approaches better than this would be appreciated! All suggestions are welcome!
My understanding of your system is the following:
The mobile application sends an UPDATE statement to the SQL Server DB.
There is an ON UPDATE trigger that updates around 30 tables (= at least 30 UPDATE statements in the trigger + 1 main UPDATE statement).
The UPDATE executes in a single transaction. This transaction ends when the trigger completes all its updates.
The mobile application does not wait for the UPDATE to finish and sends multiple SELECT statements to get data from the database.
These SELECT statements query the same tables the trigger is updating.
Blocking and deadlocks occur for some query for some user because the trigger has not completed its updates before the selects run and keeps locks on the tables.
When optimizing, we try to make our processes easier for the computer: achieve the same result in fewer iterations and use fewer resources, or resources that are more available and less overloaded.
My suggestions for your design:
Use parameterized SPs. Every time SQL Server receives a statement it creates an execution plan. For 1 UPDATE statement with a trigger, the DB needs at least 31 execution plans. This happens on a busy production environment for every connection, every time the app updates the DB. It is a big waste.
How would SPs help reduce blocking?
Right now you have 1 transaction for 31 queries, where locks are taken on all tables involved and held until the transaction commits. With SPs you'd have 31 small transactions, and only 1-2 tables would be locked at a time.
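A minimal sketch of the idea (the procedure and table names are invented for illustration): a parameterized procedure lets SQL Server compile the plan once and reuse it for every call, and keeps each transaction short.

```sql
-- Illustrative parameterized procedure: the plan is compiled once and reused.
CREATE PROCEDURE dbo.usp_UpdateCustomerStatus
    @CustomerID int,
    @Status     varchar(20)
AS
BEGIN
    SET NOCOUNT ON;
    -- Short transaction: only this table is locked, and only briefly.
    UPDATE dbo.Customers
    SET    Status = @Status
    WHERE  CustomerID = @CustomerID;
END;
GO

-- Call from the application instead of sending ad-hoc UPDATE text:
EXEC dbo.usp_UpdateCustomerStatus @CustomerID = 42, @Status = 'active';
```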
Another question I would like to address: how to do asynchronous updates to your database?
There is a feature in SQL Server called Service Broker. It processes a message queue (rows in a queue table) automatically: it monitors the queue, takes messages from it, runs the processing you specify, and deletes processed messages from the queue.
For example, you save the parameters for your SPs as messages, and Service Broker executes the SPs with those parameters.
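A condensed sketch of the moving parts (all object names are invented; the database must also have Service Broker enabled via ALTER DATABASE ... SET ENABLE_BROKER):

```sql
-- Plumbing: message type, contract, queue, and service.
CREATE MESSAGE TYPE UpdateRequest VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT UpdateContract (UpdateRequest SENT BY INITIATOR);
CREATE QUEUE UpdateQueue;
CREATE SERVICE UpdateService ON QUEUE UpdateQueue (UpdateContract);
GO

-- Activation procedure: picks one message off the queue and processes it.
CREATE PROCEDURE dbo.usp_ProcessUpdateQueue
AS
BEGIN
    DECLARE @msg xml, @h uniqueidentifier;
    RECEIVE TOP (1) @h = conversation_handle, @msg = message_body
    FROM UpdateQueue;
    IF @msg IS NOT NULL
    BEGIN
        -- Apply the update described by the parameters in @msg here.
        END CONVERSATION @h;
    END
END;
GO

-- Wire the procedure to the queue so it runs whenever messages arrive.
ALTER QUEUE UpdateQueue
WITH ACTIVATION (STATUS = ON,
                 PROCEDURE_NAME = dbo.usp_ProcessUpdateQueue,
                 MAX_QUEUE_READERS = 1,
                 EXECUTE AS SELF);
```

The application then sends a message on a conversation instead of running the update itself, so the caller returns immediately and the work happens asynchronously.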

SQL Server select gets blocked when using ADO

I have a very old Delphi app that connects to SQL Server 2008 with ADO and has always performed very well. In the last 5 days the app started to hang when querying data using a standard SELECT with joins, causing a timeout in the application.
The problem is this: if I execute the same query in SSMS, the query runs just fine (no wait, no hang).
Nothing has changed in the app, in the code, or in the servers (except the latest fixes from Windows Update).
I tried this:
If I rebuild the indexes for all tables used in the SELECT, the app starts to work again (i.e. the query does not block), and minutes later it begins to block again.
I checked sp_lock and there are no exclusive locks on the tables (only S, IS and Sch-S locks).
I tried setting the ADO cursors to READ-ONLY. This improved performance, but the query still blocks from time to time.
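A diagnostic sketch that may help narrow this down (not part of the original question): while the app is hung, find its session and check what it is waiting on and whether another session is blocking it. Differing session SET options between ADO and SSMS are a common reason the same query behaves differently in the two environments, so comparing them is also worthwhile.

```sql
-- While the app's query is hung, check its wait and any blocker.
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.blocking_session_id,    -- non-zero: another session holds the lock
       s.program_name            -- identifies the Delphi app vs SSMS
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
WHERE s.is_user_process = 1;
```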
