How to kill a long running query on Netezza when you have no admin/root/sudo access - netezza

I ran a select query on Netezza where I do not have admin rights, and it has been running for the last 2 hours now. How do I kill it? I thought of dropping the session, but it says I must have Abort rights.

That's a tough one. You're basically asking how to hack the system :)
The only thought that comes to mind: look in _v_session_detail and locate the IP address and process ID of the running query. Then (have someone) kill that process, or disconnect the machine from the network for a minute or so.

1) First identify the session:
select qs_sessionid from _v_qrystat where qs_tstart >= 'date';
2) Once you have found the qs_sessionid, do the following:
drop session <qs_sessionid>;
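Putting the two steps together, a minimal sketch might look like this. The timestamp and session id are placeholder values to substitute with your own; _v_qrystat is a standard Netezza system view, but note that dropping a session you do not own still requires Abort privilege:

```sql
-- Find long-running queries started since a given time (placeholder timestamp).
SELECT qs_sessionid, qs_tsubmit, qs_tstart
FROM   _v_qrystat
WHERE  qs_tstart >= '2013-01-01 00:00:00';

-- Then, substituting the session id found above:
DROP SESSION 12345;
```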

Related

Queries work only WITH (NOLOCK)

I have found a similar problem here, but the answer is not explained very well, so I need your help. After I exec sp_who2 there are more than 70 records in the result. Some of them were started by my PC, but some were started by someone else, because the table is located on a server that is used by multiple people in the company.
In the COMMAND column of the result table, there is one record with a SELECT INTO statement in it, and my PC is the host. Could that be the one causing the problem, and how should I kill it?
Also, the status of this command is RUNNABLE.
You can kill it by calling KILL 132, where 132 is the SPID of the process. It may be left over from a transaction you have not yet committed, which you can find by running DBCC OPENTRAN;
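As a sketch, the sequence described above looks like this. The SPID value 132 is just the example from the answer; use whatever sp_who2 actually shows for the offending command:

```sql
-- Find the SPID of the SELECT INTO command.
EXEC sp_who2;

-- Check whether an uncommitted transaction is holding things open.
DBCC OPENTRAN;

-- Terminate that session; its uncommitted work is rolled back.
KILL 132;
```

Keep in mind that KILL rolls back the session's open transaction, which can itself take a while for a large SELECT INTO.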

Delete Query for one record going super slow

I'm trying to delete one single record from the database.
Code is very simple:
SELECT * FROM database.tablename
WHERE SerialNbr = x
This gives me the one record I'm looking for. It has that SerialNbr plus a number of IDs that are foreign keys to other tables. I took care of all the foreign key constraints, so the next line of code will run.
After that the code is followed by:
DELETE FROM tablename
WHERE SerialNbr = x
This should be a relatively simple and quick query I would think. However it has now run for 30 minutes with no results. It isn't yelling about any problems with foreign keys or anything like that, it just is taking a very very long time to process. Is there anything I can do to speed up this process? Or am I just stuck waiting? It seems something is wrong that deleting one single record would take this long.
I am using Microsoft SQL Server 2008.
It's not taking a long time to delete the row, it's waiting in line for its turn to access the table. This is called blocking, and is a fundamental part of how databases work. Essentially, you can't delete that row if someone else has a lock on it - they may be reading it and want to be sure it doesn't change (or disappear) before they're done, or they may be trying to update it (to an unsatisfactory end, of course, if you wait it out, since once they commit your delete will remove it anyway).
Check the SPID for the window where you're running the query. If you have to, stop the current instance of the query, then run this:
SELECT @@SPID;
Make note of that number, then try to run the DELETE again. While it's sitting there taking forever, check for a blocker in a different query window:
SELECT blocking_session_id FROM sys.dm_exec_requests WHERE session_id = <that spid>;
Take the number there, and issue something like:
DBCC INPUTBUFFER(<the blocking session id>);
This should give you some idea about what the blocker is doing (you can get other information from sys.dm_exec_sessions etc). From there you can decide what you want to do about it - issue KILL <the spid>;, wait it out, go ask the person what they're doing, etc.
You may need to repeat this process multiple times, e.g. sometimes a blocking chain can be several sessions deep.
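The diagnostic steps above can be sketched in one place. The SPID values 67 and 52 are placeholders for whatever numbers your own windows report:

```sql
-- In the window running the DELETE: note this session's id first.
SELECT @@SPID;   -- suppose it returns 67

-- In a second window: find who is blocking that session.
SELECT blocking_session_id
FROM   sys.dm_exec_requests
WHERE  session_id = 67;          -- the SPID running the DELETE

-- See what the blocker is doing (52 = the blocking_session_id found above).
DBCC INPUTBUFFER(52);

-- If the blocker is itself blocked, repeat with the new id. Then decide:
-- KILL 52;   -- or wait it out, or go ask the person what they're doing
```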
What I think is happening is that there is some kind of blocking or contention in the database, or on that particular table. You can see that by simply running sp_who2 and then killing the offending SPIDs. Be careful when running KILL, because the blocker might not be your query.
DELETE FROM Database.Tablename
WHERE SerialNbr = x;

Good way to call multiple SQL Server Agent jobs sequentially from one main job?

I've got several SQL Server Agent jobs that should run sequentially. To keep a nice overview of the jobs that should execute I have created a main job that calls the other jobs with a call to EXEC msdb.dbo.sp_start_job N'TEST1'. The sp_start_job finishes instantly (Job Step 1), but then I want my main job to wait until job TEST1 has finished before calling the next job.
So I have written this small script that starts executing right after the job is called (Job Step 2), and forces the main job to wait until the sub job has finished:
WHILE 1 = 1
BEGIN
WAITFOR DELAY '00:05:00.000';
SELECT *
INTO #jobs
FROM OPENROWSET('SQLNCLI', 'Server=TESTSERVER;Trusted_Connection=yes;',
'EXEC msdb.dbo.sp_help_job @job_name = N''TEST1'',
@execution_status = 0, @job_aspect = N''JOB''');
IF NOT (EXISTS (SELECT top 1 * FROM #jobs))
BEGIN
BREAK
END;
DROP TABLE #jobs;
END;
This works well enough. But I got the feeling smarter and/or safer (WHILE 1 = 1?) solutions should be possible.
I'm curious about the following things, hope you can provide me with some insights:
What are the problems with this approach?
Can you suggest a better way to do this?
(I posted this question at dba.stackexchange.com as well, to profit from the less-programming-more-dba'ing point of view too.)
If you choose to poll a table, then you'd need to look at msdb.dbo.sysjobhistory and wait until the run_status is not 4. Still gonna be icky though.
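A hedged sketch of that polling approach: wait for an outcome row (step_id = 0) for job TEST1 in msdb.dbo.sysjobhistory whose run_status is not 4 (in progress). The 30-second delay and the date filter are arbitrary choices; in practice you'd want to narrow the filter to the specific run you started, e.g. by also checking run_time:

```sql
DECLARE @job_id uniqueidentifier;
DECLARE @today int = CONVERT(int, CONVERT(char(8), GETDATE(), 112));

SELECT @job_id = job_id FROM msdb.dbo.sysjobs WHERE name = N'TEST1';

WHILE NOT EXISTS (
    SELECT 1
    FROM   msdb.dbo.sysjobhistory
    WHERE  job_id = @job_id
    AND    step_id = 0            -- the job outcome row
    AND    run_status <> 4        -- finished: succeeded, failed, canceled...
    AND    run_date >= @today)    -- crude filter; refine with run_time
BEGIN
    WAITFOR DELAY '00:00:30';
END;
```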
Perhaps a different approach would be for the last step of the jobs, fail or success, to make an entry back on the "master" job server that the process has completed and then you simply look locally. Might also make tracking down "what the heck happened" easier by consolidating starts and stops at a centralized job server.
A third and much more robust approach would be to use something like Service Broker to handle communicating and signaling between processes. That'll require much more setup, but it'd be the most robust mechanism for communicating between processes.
No problem with the approach. I was doing something similar to your requirement, and I used the sysjobhistory table from msdb to check the run status, for some other reasons.
Coming back to your question: the same approach is used with msdb.dbo.sp_start_job by one of the default Microsoft BizTalk jobs, 'MessageBox_Message_ManageRefCountLog_BizTalkMsgBoxDb', to call another dependent default BizTalk job, 'MessageBox_Message_Cleanup_BizTalkMsgBoxDb'. There is even a stored procedure in the BizTalk MessageBox to check the status of a job; see 'int_IsAgentJobRunning' in the BizTalk MessageBox.

Terminate a long running process

Hi, I am using C on Solaris. I have a process that connects to a database. I need to determine whether the same process has been running the same query for a long time (say 15 seconds); if so, I need to disconnect and re-establish the database connection.
I know I can check for the same processes with the process IDs, but I am more concerned with how to determine whether the same process is running the same query.
Any help is deeply appreciated.
"I need to determine if the same process is running the same query for a long time (say 15 seconds) then I need to disconnect and re establish the database connections."
Not sure what problem you are addressing.
If you drop the connection, then the database session may persist for some time. Potentially still holding locks too.
Also, if a PL/SQL block is looping and running 1000 queries each taking a tenth of a second, should that be counted as 1 statement for your abort logic ?
You can look at V$SESSION and the SQL_ID or SQL_HASH_VALUE. Then check again after fifteen seconds and see if it has changed. You can also look at v$sessstat / v$statname for things like "execute count" or "user calls" to determine whether it is the same SQL running for a long time or multiple SQL calls.
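For example, a minimal sketch of that check, run from a session with access to the V$ views (:sid is a bind variable standing in for the session you are watching):

```sql
-- Capture what the session is running now.
SELECT sid, serial#, sql_id, sql_hash_value, status
FROM   v$session
WHERE  sid = :sid;

-- Wait ~15 seconds, then run the same query again. If SQL_ID (or
-- SQL_HASH_VALUE on older releases) is unchanged and STATUS is still
-- 'ACTIVE', the same statement is most likely still executing.
```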
If you start your queries straight from your client, you can check v$session.last_call_et. This column shows how many seconds ago the last server call started for this session; in this case, the server call is the execute of the query. This won't work if your client starts a PL/SQL block and happens to run the query (or queries) from there. In that case last_call_et will point to the start of the PL/SQL block, since that was the last thing started by your session.
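A sketch of the last_call_et check described above; the 15-second threshold matches the question, and you would typically also filter on your own username:

```sql
-- Sessions whose current server call has been running for over 15 seconds.
SELECT sid, serial#, sql_id, last_call_et
FROM   v$session
WHERE  status = 'ACTIVE'
AND    last_call_et > 15;
```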
This could be the easiest.
Does it help?
Ronald - http://ronr.blogspot.com
My advice is that you fix the root cause instead of treating the symptoms.

How to edit sessions parameters on Oracle 10g XE?

The default is 49.
How do I raise it?
You will need to issue the following command (connected as a user that has alter system privileges, sys will do it)
alter system set sessions=numberofsessions scope=spfile;
Have you been getting an ORA-12516 or ORA-12520 error?
If so it's probably a good idea to increase the number of processes too
alter system set processes=numberofprocesses scope=spfile;
IIRC you'll need to bounce the database after issuing these commands.
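Putting it together, the full sequence might look like this, run as SYS (e.g. in SQL*Plus); 100 and 300 are placeholder values, not recommendations:

```sql
ALTER SYSTEM SET sessions = 100 SCOPE = SPFILE;
ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;

-- Bounce the instance so the SPFILE changes take effect:
SHUTDOWN IMMEDIATE
STARTUP
```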
This link http://www.oracle.com/technology/tech/php/pdf/underground-php-oracle-manual.pdf has some good information about configuring XE.
I consulted it when I ran into similar issues using XE.
You can check connection limits in order to fine tune the session/process limits:
http://zhefeng.wordpress.com/2008/09/24/ora-12516-error-tnslistener-could-not-find-available-handler-with-matching-protocol-stack/
Step 1: take a look at the process limits.
select * from gv$resource_limit;
Step 2: increase the parameter from 150 (the default) to 300 (or any other desired number).
sql>alter system set processes=300 scope=spfile;
Step 3: reboot the database to let the parameter take effect.
ps: check link for more info.
