I am working remotely. I was notified that the Bugzilla server will be shut down for days. Is there a way I can get/export all the bugs assigned to me, along with their attachments?
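In case it helps to know exactly what I am after: if direct access to the underlying database were an option, something like the sketch below (against the standard Bugzilla MySQL schema, with 'me@example.com' as a placeholder login) would cover the data I need - the bug list plus the attachment contents.

    -- Sketch only: standard Bugzilla MySQL schema assumed, placeholder login.
    SELECT b.bug_id,
           b.short_desc,
           a.attach_id,
           a.filename,
           a.mimetype,
           d.thedata                -- raw attachment contents (LONGBLOB)
    FROM   bugs b
           JOIN profiles p         ON p.userid  = b.assigned_to
           LEFT JOIN attachments a ON a.bug_id  = b.bug_id
           LEFT JOIN attach_data d ON d.id      = a.attach_id
    WHERE  p.login_name = 'me@example.com';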
We are running a small staging setup on an Oracle Database (11g Enterprise Edition Release 11.2.0.3.0). We run jobs on both timed and event-based schedules, and our issue concerns the scheduling.
This setup has been running for about two years now, but we suddenly started experiencing problems with the event-based scheduling. The schedules just didn't start. We don't know whether the events didn't fire or the schedules simply didn't work, but none of the jobs that start on these schedules started.
We tried to resolve the problem by dropping and recreating a schedule, but this resulted in zombie jobs being created from the jobs that were already running and depended on that schedule. These zombie jobs have no session ID, and even our DBA doesn't know how to kill them. Even our workaround - creating new jobs and schedules - doesn't work; they just don't run. The database has been restarted a couple of times, which should clear all caches, but this didn't solve anything, and our zombie jobs survive the restarts as well.
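To be concrete, the drop/recreate was along these lines - a sketch only, with placeholder names instead of our real schedules, queues, and jobs:

    BEGIN
      -- Drop the existing event-based schedule (FORCE disables dependent jobs first).
      DBMS_SCHEDULER.DROP_SCHEDULE(
        schedule_name => 'MY_EVENT_SCHEDULE',
        force         => TRUE);

      -- Recreate it against the same event queue and condition.
      DBMS_SCHEDULER.CREATE_EVENT_SCHEDULE(
        schedule_name   => 'MY_EVENT_SCHEDULE',
        start_date      => SYSTIMESTAMP,
        event_condition => 'tab.user_data.event_name = ''FILE_ARRIVED''',
        queue_spec      => 'MY_EVENT_QUEUE');

      -- Recreate a job that is supposed to start on that schedule.
      DBMS_SCHEDULER.CREATE_JOB(
        job_name      => 'MY_EVENT_JOB',
        job_type      => 'PLSQL_BLOCK',
        job_action    => 'BEGIN my_pkg.do_work; END;',
        schedule_name => 'MY_EVENT_SCHEDULE',
        enabled       => TRUE);
    END;
    /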
Our DBA has created a ticket at Oracle Support, but they have so far not provided a solution - nor any workaround. They told us that this problem apparently is undocumented.
Questions:
How do we get the event-based scheduling up and running again?
How do we kill the zombie-jobs - as far as I know, there's no "decapitate"-function. :(
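For reference, this is the standard way to stop and remove a running job (a sketch with a placeholder job name); with no session ID behind the zombie jobs, it is not clear these calls even apply here:

    BEGIN
      -- Stop the running instance of the job, forcibly if necessary.
      DBMS_SCHEDULER.STOP_JOB(job_name => 'MY_EVENT_JOB', force => TRUE);
      -- Drop the job; FORCE stops any running instance before dropping it.
      DBMS_SCHEDULER.DROP_JOB(job_name => 'MY_EVENT_JOB', force => TRUE);
    END;
    /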
We have a SQL cluster in an Azure environment that experienced a failover/recovery incident about a week ago. Since shortly after that, this appears every 30 seconds in the Event Viewer on the primary database node:
    Event 60605, Microsoft SQL Server Server Status Reporting
    [Error] ConnectivityReportTcpPortUnknown: Could not determine sqlPort for MSSQLSERVER
I'm not 100% certain that it is related to the failover, but it seems so. I've searched and can't find anything on this particular error code. It most certainly reeks of a monitor or related event, as it's pretty consistent in its frequency.
I've researched the Azure logs, which not only report nothing relating to this event, but also nothing about our failover incident, even though that was network-connectivity related. I've also disabled all third-party monitoring that we have on that node.
I figured, given the low response here, that this must be a bug. After talking with Microsoft, it turns out that it is indeed a bug in the Microsoft SQL Server IaaS Agent that runs on Azure VMs. The agent handles some of the new Automatic Patching and Automatic Backup features on Azure but, unfortunately, does not support SQL installations that listen on multiple ports (as ours does).
Two oddities: this only started a week ago (and the service was updated recently), and even in Manual startup mode, the agent restarts itself after being stopped.
Microsoft has confirmed that this indeed is a bug.
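For anyone wanting to check whether their own installation is in the affected configuration, the startup entries in the SQL Server error log list every TCP port the instance is listening on. A sketch (xp_readerrorlog is undocumented but widely used; log positions and ports will vary):

    -- Read the current SQL Server error log, keeping only the port messages.
    EXEC xp_readerrorlog
         0,                            -- 0 = current error log
         1,                            -- 1 = SQL Server log (2 = SQL Agent log)
         N'Server is listening on';    -- one row per IP address/port combination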
We are using SQL Server 2012 Enterprise edition.
Normally we get hardly any blocked processes, but last weekend we experienced a very unusual situation. Within two hours we got more "blocked process" alerts than we had in the whole of the previous year combined - a few hundred alerts in that window. Then, suddenly and without any intervention from anyone, everything went back to normal, and we haven't had any blocked processes since. I want to prevent this situation from occurring again.
I am well aware of how to find what is causing blocking right now, but I have very little idea how to find what caused blocking in the past, once it has resolved itself.
I checked the error logs in SQL Server Management Studio, but there is nothing there for the date when the blocking occurred. There is also nothing unusual in the Windows Event Viewer. Where else should I check?
Could you please help?
From what you describe, I'm not too sure you will actually find the cause of the earlier blocking if you did not actively set up tracing, i.e. have your blocked process threshold set and configured with an alert to provide that trace information. The situation you describe is interesting and definitely worth monitoring.
Here is an article on blocked process threshold configuration in SQL Server and a link through to Alerts configuration.
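For reference, here is a sketch of the sort of setup being referred to; the 20-second threshold and the session/file names are example values only:

    -- Raise a blocked process report for anything blocked longer than 20 seconds.
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'blocked process threshold (s)', 20;
    RECONFIGURE;

    -- Capture the reports with an Extended Events session so there is history
    -- to examine the next time this happens.
    CREATE EVENT SESSION [blocked_process_reports] ON SERVER
        ADD EVENT sqlserver.blocked_process_report
        ADD TARGET package0.event_file (SET filename = N'blocked_process_reports');
    ALTER EVENT SESSION [blocked_process_reports] ON SERVER STATE = START;

Each captured report contains XML showing both the blocked and the blocking statement, which is exactly the historical detail that was missing here.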
Hope this helps
This is my very first question in this forum, so please bear with me for any mistakes or omissions.
We have a web application deployed under Tomcat 6.0.20 with Oracle 10g, and it ran perfectly fine without issues for the last year or so. This week we migrated to a new server environment. The ONLY things that changed were Tomcat (now 6.0.35) and Oracle (now 11g). I am using the same ojdbc14.jar for database connection pooling.
While the application seems to run fine, I am seeing full JVM thread dumps appearing in catalina.out roughly every 10 minutes (even when there is no apparent activity on the application side).
Application performance doesn't seem to be impacted so far, but I wanted to know whether I should be concerned about these thread dump messages.
Both Tomcat and Oracle are running on Solaris 10 (on separate physical boxes).
Any advice would be very helpful. Let me know if a thread dump snapshot would help with the analysis.
I believe this is a known problem with the combination of 11g and ojdbc14.jar.
You should be using ojdbc6.jar - that may or may not solve your problem, but it's the first thing that I'd try before looking elsewhere.
BTW, if you're upgrading Tomcat anyway, why 6.0.35 and not 7.x? 7.0.27 is out now.
I have a couple of SQL Azure databases deployed. They all seem to work just fine at most times of the day. However, I have recently noticed a consistent set of errors between roughly 5 AM and 7 AM PST (GMT-8). Does anyone know if there are maintenance windows, or anything else server-side at Azure, that would consistently cause errors during this window? I have already checked my code to verify that there isn't anything on the client side that would cause this kind of consistency in the errors.
You shouldn't be seeing any kind of daily window outage. If you are, I would recommend you open a support ticket and drive the issue through to resolution. Please also post the findings so we can all learn from it. :)
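Independently of the ticket, the connectivity events that the platform itself records can be queried from the master database of your logical server. A sketch, assuming the sys.event_log view is available there (all times are UTC, so the 5-7 AM PST window corresponds to 13:00-15:00 UTC):

    -- Connectivity events recorded by the platform over the last week.
    SELECT start_time,            -- UTC
           end_time,              -- UTC
           database_name,
           event_type,
           event_subtype_desc,
           event_count,
           description
    FROM   sys.event_log
    WHERE  event_category = 'connectivity'
      AND  start_time >= DATEADD(DAY, -7, SYSUTCDATETIME())
    ORDER  BY start_time;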