We are running a small staging setup on Oracle Database 11g Enterprise Edition, Release 11.2.0.3.0. We run jobs on both timed and event-based schedules, and our issue concerns the scheduling.
This setup has been running for about two years, but we suddenly started experiencing problems with the event-based scheduling. The schedules just didn't start. We don't know whether the events didn't fire or the schedules themselves stopped working, but none of the jobs that start on these schedules started.
We tried to resolve the problem by dropping and recreating a schedule, but this resulted in zombie jobs being created from the jobs that were already running and depended on this schedule. These zombie jobs have no session ID, and even our DBA doesn't know how to kill them. Even our workaround of creating new jobs and schedules doesn't work; they just don't run. The DB has been restarted a couple of times, which should clear all caches, but this didn't solve anything, and our zombie jobs also survive restarting the DB.
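For reference, the state described above shows up in the scheduler dictionary views; queries along these lines (run as a DBA, with an illustrative job-name pattern) list the stuck jobs as running with no session behind them:

    -- Jobs the scheduler believes are running, and the session they should be attached to
    SELECT job_name, session_id, slave_process_id, elapsed_time
      FROM dba_scheduler_running_jobs;

    -- Current state of the jobs themselves (RUNNING, SCHEDULED, DISABLED, ...)
    SELECT job_name, state, enabled, last_start_date
      FROM dba_scheduler_jobs
     WHERE job_name LIKE 'EVENT_JOB%';   -- illustrative name pattern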
Our DBA has created a ticket with Oracle Support, but so far they have provided neither a solution nor a workaround. They told us that this problem is apparently undocumented.
Questions:
How do we get the event-based scheduling up and running again?
How do we kill the zombie jobs? As far as I know, there's no "decapitate" function. :(
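In case it helps anyone hitting the same wall, below is the kind of forced stop/drop sequence plus event-schedule re-creation a DBA might try. Job, schedule, queue, and event names are placeholders, not the poster's real objects, and STOP_JOB/DROP_JOB with force require the MANAGE SCHEDULER privilege:

    BEGIN
      -- Try to stop the stuck job forcibly, then drop it forcibly.
      DBMS_SCHEDULER.STOP_JOB (job_name => 'ZOMBIE_JOB', force => TRUE);
      DBMS_SCHEDULER.DROP_JOB (job_name => 'ZOMBIE_JOB', force => TRUE);

      -- Recreate the event-based schedule against the event queue.
      -- The event_condition assumes the queue payload type has an EVENT_NAME attribute.
      DBMS_SCHEDULER.CREATE_EVENT_SCHEDULE (
        schedule_name   => 'MY_EVENT_SCHEDULE',
        start_date      => SYSTIMESTAMP,
        event_condition => 'tab.user_data.event_name = ''FILE_ARRIVED''',
        queue_spec      => 'MY_EVENT_QUEUE');
    END;
    /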
I have several publications in SQL Server 2016 in my test environment. When my distribution cleanup job runs, it runs forever without deleting anything.
I did some digging around and discovered that this job is actually blocked by one of the server's replication log reader agents. However, when I checked Replication Monitor for all the publications on the server, all of them show a good status without any latency, and their undistributed commands are 0.
What else should I look at to solve this issue?
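A couple of checks that might narrow it down, run on the distributor (a rough sketch only; the distribution database is assumed to use the default name):

    -- What is blocking what right now, and on which statement?
    SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command,
           t.text AS statement_text
      FROM sys.dm_exec_requests AS r
      CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
     WHERE r.blocking_session_id <> 0;

    -- How many commands are actually still sitting in the distribution database?
    USE distribution;
    SELECT COUNT(*) AS pending_commands
      FROM dbo.MSrepl_commands WITH (NOLOCK);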
I have googled and read many questions and answers, but only one question sounded exactly the same, and it did not have an answer.
The situation:
My group has several SQL Servers that are running SQL Server 2017. They are configured virtually identically.
These servers are build boxes, meaning they pull data from a data warehouse or an extract file, run some ETL processing, and then push to a prod box. The SSIS packages are deployed on the box where the DB resides.
Just over a month ago (with no updates having occurred), one of these servers started having an issue where all the jobs that ran an SSIS package would "hang" on the step that runs the package. Any other step runs fine, but a job step that runs a package (all jobs do this) will not even start the package. The package shows no indication in the executions that anything has even tried to start it.
If the user executes the deployed package it will run successfully.
The only thing that will "fix" the issue is restarting the agent service.
I created a simple job to run a simple package every 5 minutes. It had been running for about a week; the last time it ran was 4/11/2021 at 2:40 am, and the 2:45 run hung. I could find nothing in the event logs that occurred at that time. The server was rebooted as a normal scheduled process at 3:15 and was back online by 3:25, because that is the next time the job tried to run, and again it just hung. So even a server reboot did not fix the issue.
I am at my wits' end. Since there is no error (the job hangs and the package does not even start) and no logging I can find that shows any issue, I am at a loss as to what might cause this.
Thanks in advance.
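One thing that may help whenever this happens is to check whether SQL Agent ever handed the execution over to SSISDB at all; queries along these lines would show it (the job name here is just an example):

    -- msdb side: did the Agent job step actually get a start request?
    SELECT j.name, ja.run_requested_date, ja.start_execution_date, ja.stop_execution_date
      FROM msdb.dbo.sysjobactivity AS ja
      JOIN msdb.dbo.sysjobs AS j ON j.job_id = ja.job_id
     WHERE j.name = N'Simple 5-minute package job';   -- example name

    -- SSISDB side: was an execution ever created for the package?
    SELECT TOP (20) execution_id, folder_name, project_name, package_name,
           status, start_time, end_time
      FROM SSISDB.catalog.executions
     ORDER BY execution_id DESC;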
Take a look at the SSISDB catalog database on each of the servers involved. Has it grown excessively, so that the history needs clearing down or the retention settings need changing? How big are the transaction logs for those databases?
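For what it's worth, the cleanup settings and the amount of history kept can be checked roughly like this, and the retention window can be reduced if it turns out to be huge:

    -- Current SSISDB cleanup/retention settings
    SELECT property_name, property_value
      FROM SSISDB.catalog.catalog_properties
     WHERE property_name IN (N'RETENTION_WINDOW', N'OPERATION_CLEANUP_ENABLED',
                             N'VERSION_CLEANUP_ENABLED', N'MAX_PROJECT_VERSIONS');

    -- Rough idea of how much operational history has piled up
    SELECT COUNT(*) AS operations_rows FROM SSISDB.catalog.operations;

    -- Example: shrink the retention window to 30 days (pick a value that suits you)
    -- EXEC SSISDB.catalog.set_catalog_property N'RETENTION_WINDOW', N'30';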
I have a question regarding the implications of using WAITFOR DELAY in an Execute SQL Task in an SSIS package. Here's what's going on: I have source data tables that, due to the amount of data and the linked server connection, yada yada, are dropped and recreated every night. Before my package that utilizes this data runs, I have a For Loop container. In this container I have an Execute SQL Task that checks whether my source tables exist; if they do not, it sends me an email via a Send Mail Task, then goes to an Execute SQL Task with a WAITFOR DELAY of 30 minutes (before looping and checking for the source tables again). Now, I thought I was pretty slick with this design, but others on my team are concerned because they do not know enough about this WAITFOR task. They are concerned that my package could possibly interfere with theirs, slow down the server, use resources, etc.
From my google searches I didn't see anything that actually seemed like it would cause issues. Can anyone here speak to the implications of using this task?
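For anyone picturing the design, the two Execute SQL Task statements amount to something like the following (the table name is made up):

    -- Task 1: does tonight's source table exist yet?  (0 = no, 1 = yes)
    SELECT CASE WHEN OBJECT_ID(N'dbo.NightlySourceTable', N'U') IS NOT NULL
                THEN 1 ELSE 0 END AS SourceTableExists;

    -- Task 2: if it does not, sleep for 30 minutes before the For Loop checks again
    WAITFOR DELAY '00:30:00';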
SQL WAITFOR is ideal for this requirement, IMO; I've been using it in production SSIS packages for years with no issues. You can monitor it via SSMS Activity Monitor and see that it consumes essentially no resources while it waits.
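If the team wants to see that for themselves, the waiting session shows up suspended on a WAITFOR wait type with no CPU accumulating, e.g.:

    SELECT session_id, status, wait_type, wait_time, cpu_time
      FROM sys.dm_exec_requests
     WHERE wait_type = N'WAITFOR';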
We've had a SQL Azure cloud app/database in production for a long time, and while its performance has been a little volatile, over the last few days it has suddenly dropped drastically. Our application is unresponsive because SQL queries and stored procedures that used to take 5-10 seconds are now taking 90 seconds or more.
What are the things I should check, given that we already do regular index rebuilds/reorgs, clear down large tables when we're finished with them, etc.?
We're still on the "Web" service tier and are planning to move to the newer S2 tier soon, but we need to tackle this issue first.
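One starting point, before anything else, is to let the database report which statements are eating the time; a rough query-stats check (these DMVs are available on SQL Azure) looks something like:

    -- Top statements by average elapsed time since the plan was cached
    SELECT TOP (10)
           qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset END
                       - qs.statement_start_offset) / 2) + 1) AS statement_text
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
     ORDER BY qs.total_elapsed_time / qs.execution_count DESC;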
1) How many active connections does your SQL Azure DB have during slow times? Things get weird once you get into the 150+ range on a shared plan (see the query sketch below).
If you have a ton of connections open, that means you're not properly closing them somewhere in your app.
2) Does your DB have any blocking queries? DBs with a lot of blocking (or deadlocking) queries can behave much more slowly when you need access to the locked resources.
3) You should really consider switching to a dedicated SQL Azure plan. It is very quick to do and no action is required on the app-dev side. http://azure.microsoft.com/blog/2014/07/08/azure-update-sql-database-easy-upgrade-to-new-service-tiers-performance-improvements-pitr-for-basic-and-automated-export-for-all-service-tiers/
4) If neither of these helps, contact support. This could be an issue on their end.
5) Once the immediate problems are resolved, consider active monitoring of your SQL Azure DBs (link in my profile signature).
http://www.developer.com/services/how-to-identify-performance-bottlenecks-on-azure-sql-database.html
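As a concrete starting point for items 1 and 2 above, the usual DMVs also work on SQL Azure; a quick sketch:

    -- 1) How many user sessions are currently open against this database?
    SELECT COUNT(*) AS open_user_sessions
      FROM sys.dm_exec_sessions
     WHERE is_user_process = 1;

    -- 2) Is anything currently blocked, and by which session?
    SELECT session_id, blocking_session_id, wait_type, wait_time, command
      FROM sys.dm_exec_requests
     WHERE blocking_session_id <> 0;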
You could also have a device in your network that is slowing down performance. You might want to run some network tests to see whether the problem is internal or external. For instance, someone might have changed some firewall or security settings in a rollout and messed something up, or a device might be about to fail.
I've written a .NET application which has a SQL Server 2008 R2 database with a relatively small number of tables, but some tables might hold some 100,000,000 records! To improve the performance of SELECTs, I've created the necessary indexes and it works well. But, as everyone knows, indexes need to be rebuilt when they become fragmented.
We have installed SQL Server 2008 R2 Express on one of the customer's PCs, along with my WinForms application. Three more PCs connect to this database over a regular LAN, and everything seems fine.
Now, the problem is that I want to rebuild indexes, for example every time a user starts my program on ANY of the machines. Well, I can execute several ALTER INDEX statements, but as stated in the MS docs, OFFLINE indexing will lock the tables for the duration of the operation, which means other users will lose access to the tables when a user starts the program! I know there is an ONLINE option, but it doesn't work in the Express edition of SQL Server.
In other environments with a real server running all the time, I would create an Agent job that rebuilds the indexes overnight.
How can I solve this problem?
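As an aside, whatever trigger ends up being used, it is usually worth touching only the indexes that are actually fragmented and preferring REORGANIZE, which is always performed online and is available in Express, over an offline REBUILD; a rough sketch (index and table names are hypothetical):

    -- Find indexes worth maintaining (thresholds are the usual rules of thumb)
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent
      FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
      JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
     WHERE ips.avg_fragmentation_in_percent > 10
       AND ips.page_count > 1000;

    -- REORGANIZE does not hold long blocking locks and works in Express
    ALTER INDEX IX_MyBigTable_SomeColumn ON dbo.MyBigTable REORGANIZE;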
Without a normal 24/7 server running, it's difficult to do such maintenance automatically without disturbing users. I don't think putting that job at application startup is a good idea, as it could fire many times in parallel for no real reason, would slow down startup significantly if the tables are big, and, as you say, would keep everyone else out.
I would opt for one of two choices:
1) Set up a job on the "server" to do the rebuild on either SQL Server startup or computer startup (Express has no SQL Server Agent, so this means a startup stored procedure or a Windows scheduled task; see the sketch after this list). It will slow down the initialization of that PC when the user first powers it on, but once done it should work fine, and most likely with results similar to a nightly job.
2) Add an option in the application to launch the reindexing manually when the user wants to, warning them that it will take some time and that no one else can use the system during the process. While this provides maximum flexibility, it relies on the user doing it when they start noticing delays.
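For option 1, a minimal sketch of the startup-procedure route (database, table, and procedure names are made up; startup procedures must live in master and take no parameters):

    USE master;
    GO
    -- Placeholder names: MyAppDb / dbo.MyBigTable stand for the real database and table.
    CREATE PROCEDURE dbo.usp_StartupIndexMaintenance
    AS
        ALTER INDEX ALL ON MyAppDb.dbo.MyBigTable REORGANIZE;
    GO
    -- Mark the procedure to run automatically every time the SQL Server service starts.
    EXEC sp_procoption N'dbo.usp_StartupIndexMaintenance', 'startup', 'on';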