Hi, I am using C on Solaris. I have a process which connects to a database, and I need to determine whether that process has been running the same query for a long time (say 15 seconds); if so, I need to disconnect and re-establish the database connection.
I know I can check for the same processes with the process IDs, but I am more concerned with how to determine whether the same process is running the same query.
Any help is deeply appreciated.
"I need to determine if the same process is running the same query for a long time (say 15 seconds) then I need to disconnect and re establish the database connections."
Not sure what problem you are addressing.
If you drop the connection, then the database session may persist for some time. Potentially still holding locks too.
Also, if a PL/SQL block is looping and running 1000 queries, each taking a tenth of a second, should that be counted as one statement for your abort logic?
You can look at V$SESSION and the SQL_ID or SQL_HASH_VALUE, then check again after fifteen seconds and see whether it has changed. You can also look at V$SESSTAT / V$STATNAME for statistics such as "execute count" or "user calls" to determine whether it is one SQL statement running for a long time or multiple SQL calls.
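A rough sketch of that check on the Oracle side (the :my_sid bind variable is a placeholder for the session you are watching):
SELECT sid, sql_id, sql_hash_value, status
FROM v$session
WHERE sid = :my_sid;
-- wait ~15 seconds, run it again, and compare SQL_ID / SQL_HASH_VALUE
SELECT n.name, s.value
FROM v$sesstat s
JOIN v$statname n ON n.statistic# = s.statistic#
WHERE s.sid = :my_sid
AND n.name IN ('execute count', 'user calls');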
If you start your queries straight from your client, you can check v$session.last_call_et. This column shows how many seconds ago the last server call started for this session; in this case the server call is the execution of the query. This won't work if your client starts a block of PL/SQL and the query happens to be started from there: in that case LAST_CALL_ET will point to the start of the PL/SQL block, since that was the last thing started by your session.
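For example, something along these lines flags a session that has been in the same call for more than 15 seconds (the :my_sid placeholder is again the session of interest):
SELECT sid, status, last_call_et, sql_id
FROM v$session
WHERE sid = :my_sid
AND status = 'ACTIVE'
AND last_call_et > 15;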
This could be the easiest.
Does it help?
Ronald - http://ronr.blogspot.com
My advice is that you fix the root cause instead of treating the symptoms.
I'm executing queries against SQL Server from an application. Sometimes one of those queries runs for a very long time. Too long, actually: it usually indicates that the query will eventually fail. I would like to specify a maximum duration, after which the query should simply fail.
Is there a way to specify a command timeout in T-SQL?
I know a (connection and) command timeout can be set in the connection string. But in this application I cannot control the connection string. And even if I could it should be longer for the other queries.
As far as I know you cannot limit query time unless it is specified in the connection string (which you can't change) or the query is executed over a linked server.
Your DBA can set a timeout on a linked server as well as on queries, but a direct query does not let you do so yourself. The bigger question I would ask is why the query fails. Is there already a preset timeout on the server (hopefully), are you running out of memory and paging, or is it one of a million other reasons? Do you have a DBA? If one of my servers were being hammered by such bad code, I would be contacting the person executing it. If your DBA hasn't contacted you already, you should reach out and ask for help determining the failure reason.
If the unexpectedly long duration happens to be due to a (local) lock, there is a way to break out of it. Set the lock timeout before running the query:
SET LOCK_TIMEOUT 600000 -- Wait 10 minutes max to get the lock.
Do not forget to set it back afterwards to prevent subsequent queries on the connection from timing out:
SET LOCK_TIMEOUT -1 -- Wait indefinitely again.
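Put together, the pattern looks roughly like this (dbo.SomeTable is just a placeholder; if a lock cannot be acquired within the timeout, the statement fails with lock timeout error 1222 instead of waiting forever):
SET LOCK_TIMEOUT 600000    -- wait at most 10 minutes for locks
SELECT * FROM dbo.SomeTable WHERE SomeColumn = 42
SET LOCK_TIMEOUT -1        -- restore the default: wait indefinitely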
I have a Perl script which connects to a database and scans data from several different tables. I face a problem if I lose my connection: it rolls back the whole transaction. How can I make the Perl script re-establish the connection and resume the process from where the interruption took place? Can Perl resume the connection, or is there some other technique to restart the process from the point of interruption? If so, could anyone guide me through the steps, please?
This is needed because we have a lot of data, and it takes a week to scan it all and insert it into a specific table. If a database offline backup runs in between, it disconnects all the connections, whatever the transaction has done is rolled back, and everything has to be run again from the beginning.
We can commit whatever has been done, but the challenge is how to restart the process from where the interruption took place, so that we don't have to run it from the beginning.
Relying on a DB connection to be consistently open for over a day is the wrong approach.
A possible solution involves:
1) Connect to the DB to create a DB handle. Use an infinite loop with a sleep to wait until you have a good handle. Put this into a subroutine.
2) Put the individual requests for the individual tables in a data structure such as an array.
Execute the separate queries as separate statements in a loop.
Check whether the handle is stale before each individual request. Clean up and recreate the handle if necessary, using the subroutine from 1).
Handle breakdowns during a request in eval blocks, using the "redo" statement to make sure no statement gets skipped.
3) Keep the data between requests either in memory or in any non-SQL storage such as a key/value store (e.g. Redis).
4) Compute whatever needs computation.
5) When you have all the data for your commit transaction, do the commit.
This solution assumes you don't care about changes between reading and committing back. If you do, you need to LOCK the affected tables first; you probably don't want to lock a table for a week, though.
I'm trying to delete one single record from the database.
Code is very simple:
SELECT * FROM database.tablename
WHERE SerialNbr = x
This gives me the one record I'm looking for. It has that SerialNbr plus a number of IDs that are foreign keys to other tables. I have taken care of all the foreign key constraints, so the next part of the code should be able to run.
After that the code is followed by:
DELETE FROM tablename
WHERE SerialNbr = x
This should be a relatively simple and quick query, I would think. However, it has now been running for 30 minutes with no result. It isn't complaining about any problems with foreign keys or anything like that; it is just taking a very, very long time. Is there anything I can do to speed this up, or am I just stuck waiting? Something seems wrong if deleting one single record takes this long.
I am using Microsoft SQL Server 2008.
It's not taking a long time to delete the row, it's waiting in line for its turn to access the table. This is called blocking, and is a fundamental part of how databases work. Essentially, you can't delete that row if someone else has a lock on it - they may be reading it and want to be sure it doesn't change (or disappear) before they're done, or they may be trying to update it (to an unsatisfactory end, of course, if you wait it out, since once they commit your delete will remove it anyway).
Check the SPID for the window where you're running the query. If you have to, stop the current instance of the query, then run this:
SELECT @@SPID;
Make note of that number, then try to run the DELETE again. While it's sitting there taking forever, check for a blocker in a different query window:
SELECT blocking_session_id FROM sys.dm_exec_requests WHERE session_id = <that spid>;
Take the number there, and issue something like:
DBCC INPUTBUFFER(<the blocking session id>);
This should give you some idea about what the blocker is doing (you can get other information from sys.dm_exec_sessions etc). From there you can decide what you want to do about it - issue KILL <the spid>;, wait it out, go ask the person what they're doing, etc.
You may need to repeat this process multiple times, e.g. sometimes a blocking chain can be several sessions deep.
What I think is happening is that there is some kind of contention in the database, or on that particular table. You can see that by simply running sp_who2 and killing the blocking SPID. Be careful when issuing KILL, because the blocking session might not be your own query.
Delete From Database.Tablename
where SerialNbr=x
I have a database where data is processed in some kind of batches, where each batch may contain even a million records. I am processing data in a console application, and when I'm done with a batch, I mark it as Done (to avoid reading it again in case it does not get deleted), delete it and move on to a next batch.
I have the following simple stored procedure which deletes processed "batches" of data
CREATE PROCEDURE [dbo].[DeleteBatch]
(
@BatchId bigint
)
AS
SET XACT_ABORT ON
BEGIN TRANSACTION
DELETE FROM table1 WHERE BatchId = @BatchId
DELETE FROM table2 WHERE BatchId = @BatchId
DELETE FROM table3 WHERE BatchId = @BatchId
COMMIT
RETURN @@Error
I am using NHibernate with a command timeout of 10 minutes, and the DeleteBatch procedure call times out occasionally.
Actually, I don't want to wait for DeleteBatch to complete. I have already marked the batch as Done, so I want to move on to the next batch, or maybe even exit my console application if there are no more pending batches.
I am using Microsoft SQL Express 2012.
Is there any simple solution to tell the SQL server - "launch DeleteBatch and run it asynchronously even if I disconnect, and I don't even need the result of the procedure"?
It would also be great if I could set a lower processing priority for DeleteBatch because other queries are more important than DeleteBatch.
I don't know much about NHibernate, but if you can use ADO.NET in this scenario, you can implement asynchronous database operations easily using the SqlCommand.BeginExecuteNonQuery method in C#. This method starts asynchronously executing a Transact-SQL statement or stored procedure that does not return rows, so that other tasks can run concurrently while the statement is executing.
EDIT: If you really want to exit your console app before the DB operation ends, then you will have to create threads manually in your code and perform the DB operation on those threads. When you close your console app these threads will still be alive, because threads created using System.Threading.Thread are foreground threads by default. Having said that, it is also important to consider how many threads you will create. In your case you would have to assign one thread per batch; if the number of batches is very large, a large number of threads would be created, which would in turn eat a large amount of your CPU resources and could even freeze your OS for a long time.
Another simple solution I could suggest is to insert the BatchIds into some database table. Create an INSERT TRIGGER on that table. This trigger would then call a stored proc with BatchId as its parameter and would perform the required tasks.
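A minimal sketch of that trigger idea (table and trigger names are illustrative, and it assumes one BatchId is inserted at a time):
CREATE TABLE dbo.BatchesToDelete (BatchId bigint NOT NULL)
GO
CREATE TRIGGER trg_BatchesToDelete_Insert ON dbo.BatchesToDelete
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @BatchId bigint
    SELECT @BatchId = BatchId FROM inserted    -- single-row assumption
    EXEC dbo.DeleteBatch @BatchId = @BatchId   -- the procedure from the question
END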
Hope it helps.
What if your console application, instead of trying to delete the batch, just wrote the batch id into a "BatchIdsToDelete" table? Then you could use an agent job running every x minutes/seconds (or whatever) to delete the top x percent of records for a given batch id, perhaps sleeping a little before tackling the next x percent.
Maybe worth having a look at that?
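A rough sketch of what that job step could look like, deleting in fixed-size chunks rather than by percentage (the "BatchIdsToDelete" queue table is the one suggested above; the chunk size and delay are arbitrary, and table2/table3 from the question would be handled the same way as table1):
DECLARE @BatchId bigint = (SELECT TOP (1) BatchId FROM dbo.BatchIdsToDelete)
WHILE @BatchId IS NOT NULL
BEGIN
    DELETE TOP (10000) FROM table1 WHERE BatchId = @BatchId
    IF @@ROWCOUNT = 0
    BEGIN
        -- this batch is fully removed; take it off the queue and pick the next one
        DELETE FROM dbo.BatchIdsToDelete WHERE BatchId = @BatchId
        SET @BatchId = (SELECT TOP (1) BatchId FROM dbo.BatchIdsToDelete)
    END
    ELSE
        WAITFOR DELAY '00:00:05'    -- pause between chunks to reduce contention
END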
Look at this article, which explains how to do reliable asynchronous procedure execution, code included. It is based on Service Broker.
The problem with trying to use .NET async features (like BeginExecute, Tasks, etc.) is that the call is unreliable: if the process exits before the procedure completes, the execution is canceled on the server because the session is disconnected.
But you also need to look at the task itself: why is the deletion taking over 10 minutes? Is it blocked by contention? Are you missing indexes on BatchId? Use the Performance Troubleshooting Flowchart.
Late to the party, but if someone else has this problem, use sqlcmd. With Express you are limited in the number of users (I think 2, but it may have changed since the last time I did much with Express). You can have sqlcmd run queries, stored procedures ...
And you can kick off sqlcmd with the Windows Scheduler, a script, an Outlook rule ...
I used it to manage like 3 or 4 thousand SQL Server Express instances, with their nightly maintenance scheduled with the Windows Scheduler.
You could also create and run a PowerShell script; it's more versatile and probably more widely used than sqlcmd.
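For example, a scheduled task could run something along these lines (server, database, and parameter value are placeholders):
sqlcmd -S .\SQLEXPRESS -d MyDatabase -E -Q "EXEC dbo.DeleteBatch @BatchId = 42"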
I needed the same thing. After searching for a long time I found the solution; it's the easiest way:
// Build a connection string with asynchronous processing enabled.
SqlConnection connection = new SqlConnection();
connection.ConnectionString = "your connection string";
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connection.ConnectionString);
builder.AsynchronousProcessing = true;
// Open a new connection using the modified connection string.
SqlConnection newSqlConn = new SqlConnection(builder.ConnectionString);
newSqlConn.Open();
// Start the stored procedure without waiting for it to finish.
SqlCommand cmd = new SqlCommand(storeProcedureName, newSqlConn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.BeginExecuteNonQuery(null, null);
Ideally, the SqlConnection object would take an optional parameter/property, the URL of a web service (be it WCF, WebApi, or something yet to be named), and, if the user wished, notify the user of execution progress and/or completion status by calling that URL with a well-known message.
Theoretically, DbConnection is an extensible object that one is free to implement. However, it will take some review of what really can and needs to be done before this approach can be called feasible.
I have a server application, and a database. Multiple instances of the server can run at the same time, but all data comes from the same database (on some servers it is postgresql, in other cases ms sql server).
In my application, there is a process which can take hours to complete. I need to ensure that only one instance of this process runs at a time: if one server is processing, no other server instance can start processing until the first one has completed.
The process depends on one table (let's call it 'ProcessTable'). What I do is, before any server starts the hour-long process, I set a boolean flag in the ProcessTable which indicates that this record is 'locked' and is being processed (not all records in this table are processed / locked, so I need to specifically mark each record which is needed by the process). So when the next server instance comes along while the previous instance is still processing, it sees the boolean flags and throws an exception.
The problem is that two server instances might be activated at nearly the same time, and when both check the ProcessTable there may be no flags set yet, because both servers are still in the process of setting the flags; since neither transaction has committed, neither process will see the locking done by the other. The locking mechanism itself may take a few seconds, so there is a window of opportunity in which two servers could still end up processing at the same time.
It appears that what I need is a single record in my 'Settings' table which should store a boolean flag called 'LockInProgress'. So before even a server can lock the needed records in the ProcessTable, it first must make sure that it has full rights to do the locking by checking the 'LockInProgress' column in the Settings table.
So my question is, how do I prevent two servers from both modifying that LockInProgress column in the settings table, at the same time... or am I going about this in the wrong manner?
Please note that I need to support both postgresql and ms sql server as some servers use one database, and some servers use the other.
Thanks in advance...
How about obtaining a lock on the record first and then updating the record to show it is "locked"? This prevents the second instance from acquiring the lock, so its update of the record fails.
The point is to make sure the lock and the update happen as one atomic step.
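On SQL Server that could look roughly like the sketch below (PostgreSQL would use SELECT ... FOR UPDATE instead of the UPDLOCK/HOLDLOCK hints); it assumes Settings is the single-row table with the LockInProgress flag described in the question:
BEGIN TRANSACTION
-- take the row lock first; a second instance blocks on this SELECT
SELECT LockInProgress FROM Settings WITH (UPDLOCK, HOLDLOCK)
-- still holding the lock, flip the flag only if it is not already set;
-- @@ROWCOUNT tells you whether this instance actually acquired it
UPDATE Settings SET LockInProgress = 1 WHERE LockInProgress = 0
COMMIT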
Make a stored procedure that hands out the lock, and run it under 'serializable' isolation. This will guarantee that one and only one process can get at the resource at any given time.
Note that this means that the second process trying to get at the lock will block until the first process releases it. Also, if you have to get multiple locks in this manner, make sure that the design of the process guarantees that the locks will be acquired and released in the same order. This will avoid deadlock situations where two processes hold resources while waiting for each other to release locks.
Unless you can't deal with your other processes blocking this would probably be easier to implement and more robust than attempting to implement 'test and set' semantics.
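A minimal sketch of such a lock-granting procedure on the SQL Server side (procedure name and columns are illustrative; a PostgreSQL version would be structured the same way):
CREATE PROCEDURE dbo.AcquireProcessLock
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION
    -- flip the flag only if nobody else holds it; serializable isolation keeps
    -- two concurrent callers from both seeing it as clear
    UPDATE Settings SET LockInProgress = 1 WHERE LockInProgress = 0
    -- 1 means this caller got the lock, 0 means another instance holds it
    SELECT @@ROWCOUNT AS LockAcquired
    COMMIT
END
The long-running process resets LockInProgress to 0 when it finishes, which releases the lock for the next caller.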
I've been thinking about this, and I think this is the simplest way of doing things; I just execute a command like this:
update settings set settingsValue = '333' where settingsKey = 'ProcessLock' and settingsValue = '0'
'333' would be a unique value which each server process gets (based on date/time, server name, + random value etc).
If no other process has locked the table, then settingsValue will be 0 and that statement will update it.
If another process has already locked the table, then the statement becomes a no-op and nothing gets modified.
I then immediately commit the transaction.
Finally, I requery the table for the settingsValue, and if it is the correct value, then our lock succeeded and we continue on, otherwise an exception is thrown, etc. When we're done with the lock, we reset the value back down to 0.
Since I'm using the SERIALIZABLE transaction isolation level, I can't see this causing any issues... please correct me if I'm wrong.
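A compact sketch of the whole sequence described above (names match the UPDATE shown earlier; '333' stands for the per-server unique value):
BEGIN TRANSACTION
UPDATE settings SET settingsValue = '333' WHERE settingsKey = 'ProcessLock' AND settingsValue = '0'
COMMIT
-- re-check: did we actually get the lock?
SELECT settingsValue FROM settings WHERE settingsKey = 'ProcessLock'
-- if it equals '333', proceed; otherwise raise an error / back off
-- when the long process is finished, release the lock:
UPDATE settings SET settingsValue = '0' WHERE settingsKey = 'ProcessLock'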