Detect the specific script that causes DB blocked sessions

Folks,
I need guidance. Before executing scripts in a database (Oracle or PostgreSQL, say), is there any way to detect the specific script that will cause blocked sessions once execution starts?

No, it is not possible to pre-identify whether a script will cause any locking in the database. But you can examine the explain plan of each query before executing it, and you can monitor for blocked sessions while the script runs.
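A minimal sketch of both ideas (table, column, and value names are placeholders; the monitoring queries assume Oracle's v$session view and PostgreSQL 9.6+ for pg_blocking_pids):

-- Oracle: inspect the plan before running a statement
EXPLAIN PLAN FOR SELECT * FROM some_table WHERE some_col = :val;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- PostgreSQL equivalent
EXPLAIN SELECT * FROM some_table WHERE some_col = 42;

-- Oracle: spot blockers while the script is running
SELECT sid, serial#, blocking_session, event
FROM v$session
WHERE blocking_session IS NOT NULL;

-- PostgreSQL: spot blockers while the script is running
SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;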


We are using Sybase/ODBC: how do we deal with disconnects while running long batch SQL queries?

We are developing an application in C# that uses ODBC and the "Adaptive Server Enterprise" driver to extract data from a Sybase DB.
We have a long SQL batch query that creates a lot of intermediate temporary tables and returns several DataTable objects to the application. We are seeing exceptions saying TABLENAME not found, where TABLENAME is one of our intermediate temporary tables. When I check the status of the OdbcConnection object in the debugger, it is Closed.
My question is very general. Is this the price you pay for having long-running complicated queries? Or is there a reliable way to get rid of such spurious disconnects?
Many thanks in advance!
There are a couple of ODBC timeout parameters; see the SDK docs at:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc20116.1550/html/aseodbc/CHDCGBEH.htm
Specifically, CommandTimeOut and ConnectionTimeOut, which you can set accordingly.
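For example, a hedged sketch of an ASE ODBC connection string with both timeouts raised (keyword spellings per the SDK page above; the server, database, and credentials are placeholders):

Driver={Adaptive Server Enterprise};server=myaseserver;port=5000;db=mydb;uid=myuser;pwd=mypwd;CommandTimeOut=600;ConnectionTimeOut=60;

From the C# side you can also raise OdbcCommand.CommandTimeout per command; the .NET default is 30 seconds, which a long batch can easily exceed.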
But it's much more likely that you're being blocked, or something similar, while the process is running. Maybe ask your DBA to check the query plan for the various steps in your batch and look for specific problem areas, such as table scans, which could be masking your timeout issue.

Lock an SSIS package from multiple simultaneous executions

I have an SSIS package (Package1.dtsx) that has been deployed to SSISDB. Currently I schedule the package with some parameters in a SQL Server Agent job.
How do I lock the package (Package1.dtsx) if someone attempts to run it in another SQL Server Agent job with different parameters?
You can do this yourself by adding a flag and having your package check this flag before processing. Either quit out, loop until the flag is clear, or apply some other logic; a sketch of the idea follows.
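A minimal T-SQL sketch of the flag idea (the table and flag names are placeholders; the claim/release steps would run as Execute SQL Tasks at the start and end of the package):

-- One row per package; IsRunning acts as the flag
CREATE TABLE dbo.PackageFlag
(
    PackageName sysname NOT NULL PRIMARY KEY,
    IsRunning   bit     NOT NULL DEFAULT 0
);

-- First task: claim the flag atomically
UPDATE dbo.PackageFlag
SET IsRunning = 1
WHERE PackageName = 'Package1.dtsx'
  AND IsRunning = 0;

IF @@ROWCOUNT = 0
    RAISERROR('Package1.dtsx is already running; aborting.', 16, 1);

-- Last task (and in an error handler): release the flag
UPDATE dbo.PackageFlag
SET IsRunning = 0
WHERE PackageName = 'Package1.dtsx';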
I personally have only ever had one Agent job per package, and that job handles the multiple-execution scenarios.
Locking a package to prevent multiple executions is not possible. Think of it as a file: there is no way to lock a file against a user who has the rights to use it.
You can create user groups/roles on SQL Server to segregate execution depending on your needs and usage patterns, but to me there is no straightforward way of locking a file against multiple executions. Sorry!

red herring error "The user does not have permission to perform this action"

When running a stored procedure, we're getting error 297:
"The user does not have permission to perform this action"
This occurs during times of heavy load (regularly, when a trim job is running concurrently). The error clears up when the service accessing SQL Server is restarted (and very likely the trim job is finished as well), so it's obviously not a real permissions problem. The error is reported on a line of a stored procedure which accesses a function, which in turn accesses dynamic management views.
What kind of situations could cause an error like this, when it's not really a permissions problem?
Might turning on trace flag 4616 fix this, as per this article? I'd like to be able to just try it, but I need more info. Also, I'm baffled by the fact that this is an intermittent problem, only happening during periods of high activity.
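For reference, the flag can be toggled like this (-1 applies it server-wide and requires sysadmin; verify the flag's behavior in your environment before relying on it):

DBCC TRACEON (4616, -1);
-- and to revert:
DBCC TRACEOFF (4616, -1);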
I was trying to reproduce this same error in other situations (that were also not real permissions problems), and I found that when running this on SQL Server 2005 I do get the permissions problem:
select * from sys.dm_db_index_physical_stats(66,null,null, null, null)
(66 is an invalid DBID.)
However, we're not using dm_db_index_physical_stats with an incorrect DBID. We ARE using dm_tran_session_transactions and dm_tran_active_transactions, but they don't accept parameters so I can't get the error to happen with them. But I was thinking perhaps that the issue is linked.
Thanks for any insights.
Would it be related to concurrency issues?
For example, the same data being processed or a global temp table being accessed? If so, you may consider sp_getapplock (see the sketch below).
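A minimal sketch of serializing the contended section with sp_getapplock (the resource name is an arbitrary placeholder):

BEGIN TRAN;
DECLARE @result int;
EXEC @result = sp_getapplock
        @Resource    = 'MyCriticalSection',
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',
        @LockTimeout = 10000;  -- wait up to 10 seconds
IF @result >= 0
BEGIN
    -- ... do the contended work here ...
    COMMIT TRAN;  -- releases the app lock with the transaction
END
ELSE
    ROLLBACK TRAN;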
And does each connection use different credentials with a different set of permissions? Do all users have VIEW SERVER STATE (i.e., GRANT VIEW SERVER STATE TO xxx)?
Finally, and related to both ideas above, do you use EXECUTE AS anywhere that may not be reverted?
Completely random idea: I've seen this before, but only when I've omitted a GO between the end of the stored proc definition and the following GRANT statement, so the SP tried to set its own permissions (sketch below). Is it possible that a timeout or concurrency issue causes some code to run that wouldn't normally?
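The pattern in question (procedure and role names are hypothetical):

CREATE PROCEDURE dbo.usp_Example
AS
    SELECT 1;
GO  -- without this batch separator, the GRANT below compiles into the proc body
GRANT EXECUTE ON dbo.usp_Example TO some_role;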
If this occurs only during periods of heavy activity, maybe you can run Profiler and watch which locks are being held.
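The DMVs can show the same thing without a trace; a minimal sketch (requires VIEW SERVER STATE):

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;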
Also, is this always being run the same way? For example, is it run as a SQL Agent job, or are you sometimes running it manually and sometimes as a job? My thinking is that maybe it runs as different users at different times.
Maybe also take a look at this Blog Post
Thanks everyone for your input. What I did (which looks like it has fixed the problem for now) is alter the daily trim job. It now waits substantially longer between deletes and also deletes a much smaller chunk of records at a time, roughly as sketched below.
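Roughly along these lines (the table name, filter, batch size, and delay are placeholders for our actual job):

DECLARE @batch int;
SET @batch = 500;
WHILE 1 = 1
BEGIN
    DELETE TOP (@batch) FROM dbo.TrimTarget
    WHERE created_date < DATEADD(day, -30, GETDATE());
    IF @@ROWCOUNT = 0 BREAK;
    WAITFOR DELAY '00:00:10';  -- pause between chunks so other work can proceed
END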
I'll update this later on with more info as I get it.
Thanks again.

Can my Oracle database still be used by end users while a crontab job updates it?

I have a crontab job that runs at 00:00 to update my website database every night.
To prevent the job from breaking end users' operations on the website, should I stop my website while the job is running? Is there a better alternative? Thanks in advance!
Oracle, like other DBMSs, allows concurrent access to the data, even in the case of concurrent reads and writes.
So yes, the users will still be able to access the database during the update job. Depending on what the update job does and how long it takes, there might be interference, but I don't know the details.
Normally, you should try to design the update job so that it does not interfere with user activity, if possible, instead of shutting down the site while it runs.
Try it, and if you find you do have interference and the job is very long-running, check whether the design allows you to COMMIT more often. Otherwise, let us know details such as what the job is doing, how many rows you are likely to insert or update, and which version of Oracle you are on.
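For example, a batched PL/SQL delete that commits between chunks (the table and predicate are placeholders):

BEGIN
    LOOP
        DELETE FROM staging_rows
        WHERE processed = 'Y'
          AND ROWNUM <= 10000;   -- work in modest batches
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;                  -- release locks and undo between batches
    END LOOP;
    COMMIT;
END;
/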

OpenQuery to DB2/AS400 from SQL Server 2000 causing locks

Every morning we have a process that issues numerous queries (~10,000) to DB2 on an AS400/iSeries/i6 (whatever IBM calls it nowadays). In the last two months, the operators have been complaining that our queries lock a couple of files, preventing them from completing their nightly processing. The queries are very simplistic, e.g.
Select [FieldName] from OpenQuery([LinkedServerName], 'Select [FieldName] from [LibraryName].[FileName] where [SomeField]=[SomeParameter]')
I am not an expert on the iSeries side of the house and was wondering if anyone had any insight on lock escalation from an AS400/DB2 perspective. The ID that is causing the lock has been confirmed to be the ID we registered our linked server as, and we know it's most likely us because the [Library] and [FileName] are consistent with the query we are issuing.
This has just started happening recently. Is it possible that our SELECT statements are causing the AS400 to escalate locks? The problem is that the locks are not being released without manual intervention.
Try adding "FOR READ ONLY" to the query; then it won't lock records as you retrieve them.
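Applied to the query from the question, that looks like:

Select [FieldName] from OpenQuery([LinkedServerName],
    'Select [FieldName] from [LibraryName].[FileName]
     where [SomeField]=[SomeParameter]
     FOR READ ONLY')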
Writes to the files on the AS/400 side from an RPG/COBOL/JPL job will cause a file lock (by default, I think), and the job will be unable to get this lock while you are reading. The solution we used was: don't read the files while jobs are running. We created a big schedule sheet in Excel and put all the SQL Server and AS/400 jobs on it in time slots, with color coding for importance and server. That way there were no conflicts, and no out-of-date extract files either.
You might have Commitment Control causing a lock for a Repeatable Read. Check the SQL Server ODBC connection associated with <linkedServerName> to change the commitment control.
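If the linked server connects through the IBM i Access ODBC driver, commitment control can be set in the DSN or connection string; a hedged sketch (the CMT keyword and its values are specific to that driver, so verify against your driver's documentation; system and credentials are placeholders):

Driver={iSeries Access ODBC Driver};System=MYAS400;Uid=linkeduser;Pwd=secret;CMT=0

Here CMT=0 requests commit immediate (*NONE), i.e. no commitment control, so reads should not hold repeatable-read locks.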
