Weblogic 11g Indefinite Lockout

Is there a way to enable indefinite lockout in Weblogic 11g?
I only see UserLockoutManagerMBean.LockoutDuration, which appears to be limited to between 0 and 30 minutes.
To clarify: by indefinite lockout I mean that the account can only be unlocked by an admin.

According to Oracle Support, there is no indefinite lockout. However, the proposed workaround is to set the lockout duration to a very large value.
The maximum value for LockoutDuration is 9223372036854775807 minutes (2^63 - 1, i.e. Long.MAX_VALUE), which comes to roughly 17,536,635,926,176 years.
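As a sanity check on that figure (taking a year as 365.25 days):

\[
\frac{9223372036854775807\ \text{min}}{60 \times 24 \times 365.25\ \text{min/yr}} \approx 1.75 \times 10^{13}\ \text{years}
\]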

Maximum execution time for Flyway migrations

Does anyone have experience with the maximum execution time of Flyway migrations?
What is the maximum execution time, if Flyway sets one at all (or does this depend primarily on database settings)?
What happens when this time is hit?
What happens if multiple migrations run in a chain and one of them times out?
I have been unable to find any related information in the docs or in any articles.
Flyway itself currently does not set a timeout or maximum execution time. The timeout is managed by the target database and the settings on your connection to it.
There is a GitHub issue thread on this topic if you would like a timeout to be added and would like to share your scenario with the Flyway team.
What happens when you hit a timeout (or if there is a network or other failure which causes the query to disconnect) will vary depending on how you are using transactions and whether your target database supports DDL statements within a transaction.
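Since the timeout comes from the database or the connection, one option is to set it per migration script. A minimal sketch, assuming a PostgreSQL target (the file and table names are hypothetical):

-- V42__add_orders_index.sql (hypothetical migration file)
-- Flyway sets no timeout itself, so cap it at the session level instead.
SET statement_timeout = '60s';  -- PostgreSQL: abort any statement running longer than 60s

CREATE INDEX idx_orders_customer ON orders (customer_id);  -- 'orders' is illustrative

SET statement_timeout = 0;      -- 0 = no timeout (the usual default)

If the index build exceeds the limit, PostgreSQL raises an error and Flyway marks the migration as failed; because PostgreSQL supports transactional DDL, the migration is rolled back.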

SQL Server CPU Permanently stuck at 100%

For months we have been plagued with an issue where a database server which serves two web servers has its CPU shoot up to 100% and stay there, for hours if we let it. All 6 processors. This happens every few days, at different times of day. The CPU usage is due to sqlserver.exe.
This is not a general SQL Server performance issue ("how do I make my queries more efficient"). When there is an incident, CPU goes from its typical 20% up to 100% and stays there until a server reboot.
We are on SQL Server 2016 SP2 cumulative update 6.
We've added some logging and see that during the latest CPU incident, spins per collision on the OPT_IDX_STATS spinlock shot up to 5,775,813. Not sure if that's the cause or a symptom?
Before CPU 100% incident
name collisions spins spins_per_collision sleep_time backoffs
---- ---------- ----- ------------------- ---------- --------
OPT_IDX_STATS 787 200250 254.4473 0 5
LOCK_HASH 2137398 630970500 295.205 1410 52938
1 minute later
name collisions spins spins_per_collision sleep_time backoffs
---- ---------- ----- ------------------- ---------- --------
OPT_IDX_STATS 12 69309750 5775813 7 27
LOCK_HASH 17292 49187101 2844.5 47 555
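For reference, the sort of logging query that produces the numbers above (a sketch; the original logging script isn't shown):

-- snapshot the two spinlocks of interest from the spinlock stats DMV
SELECT name, collisions, spins, spins_per_collision, sleep_time, backoffs
FROM sys.dm_os_spinlock_stats
WHERE name IN ('OPT_IDX_STATS', 'LOCK_HASH')
ORDER BY spins DESC;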
We see around 40 queries running when an incident hits. They are typically instances of the same two LINQ queries. No query ever has an elapsedMS longer than 20,000 ms, so it's not one long-running query crushing the CPU. They are expensive queries, but they seem to be a symptom of the problem rather than a cause: we see those queries piling up because the DB is running so slowly because the CPU is so high. Those same queries (along with others) are executed all the time, including after the DB server is rebooted, and they don't cause a problem after a reboot.
The server has 36 GB of memory and we don't see usage going higher than 22%.
Some other interesting information: killing the currently running queries lets the CPU drop, but only briefly (it shoots up again as the web servers send more queries). Pausing the DB to let the queries finish lets the CPU drop for as long as it's paused, but it shoots up again when the DB is resumed. Rebooting the database server always fixes the issue. Before and after the reboot the web servers should be sending the same types of queries, which points to a problem within SQL Server itself: otherwise, why would a reboot fix the problem?
Update: I wrote a PowerShell script that clears the plan cache if the CPU is > 95% for 45 seconds, and that seems to have worked around the problem. Still don't know what the underlying issue is, though.
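The check behind that workaround is roughly the following (a T-SQL sketch; the actual PowerShell isn't shown, and the "45 seconds above 95%" persistence logic would sit in the calling script):

-- most recent SQL process CPU sample from the scheduler-monitor ring buffer
SELECT TOP (1)
       rb.record.value('(Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct
FROM (SELECT CONVERT(xml, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE N'%<SystemHealth>%') AS rb
ORDER BY rb.record.value('(Record/@time)[1]', 'bigint') DESC;

-- if the value stays > 95 for ~45 seconds, the script clears the plan cache:
-- DBCC FREEPROCCACHE;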
Copying comments to an answer as requested:
What is the memory configuration for the SQL Server? Do you have it set to correctly limit the amount of memory SQL Server will try to claim for itself? I've seen people leave it at the default, and then get into pathological situations where SQL Server claims more memory than is available, causing it and the OS to swap, cratering performance. This is always the first thing to check. There are guides out there for the best value for this particular setting for your memory, OS, and configuration. A good rule of thumb for 80% of normal configurations is take installed memory, subtract 4GB, and use that value for SQL Server.
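For this server, the rule of thumb works out as follows (a sketch; 36 GB installed minus 4 GB for the OS leaves about 32 GB, i.e. 32768 MB):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 32768;  -- 36 GB installed - 4 GB for the OS
RECONFIGURE;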
The next thing to check is your plan cache and the like. If you have hard-coded SQL queries (not parameterized) that vary with each request, you could have a horribly polluted plan cache. Try turning on the "optimize for ad hoc workloads" option under Advanced options. Try clearing all caches and see whether that affects performance (something short of a reboot).
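Both suggestions in T-SQL (a sketch; 'show advanced options' is assumed to be enabled, as above):

EXEC sp_configure 'optimize for ad hoc workloads', 1;  -- store plan stubs for single-use queries
RECONFIGURE;

DBCC FREEPROCCACHE;  -- clear the plan cache: something short of a reboot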
You can look at using Resource Governor; I've had to use it in a similar situation where I HAD to share the database with some resource hogs:
https://learn.microsoft.com/en-us/sql/relational-databases/resource-governor/resource-governor?view=sql-server-2017
The link is for SQL Server 2017, but it's still relevant in SQL 2016; I didn't easily find the 2016-specific link.
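A minimal sketch of that setup (all names are illustrative; the classifier function must live in master, and it routes a hypothetical 'web_app' login into a capped pool):

CREATE RESOURCE POOL app_pool WITH (MAX_CPU_PERCENT = 50);   -- cap the resource hogs
CREATE WORKLOAD GROUP app_group USING app_pool;
GO
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'web_app'   -- hypothetical login to throttle
        RETURN N'app_group';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;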

Long elapsed_time with Azure SQL

We have been using Azure SQL successfully with low data volumes for some time. Now that we have seen a moderate increase in data storage volumes, we are experiencing very long query times. What is weird is that querying dm_exec_query_stats shows that the worker/CPU time for queries is very low (roughly 0 seconds) while the elapsed time is long (it can be over 30 seconds).
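The comparison was along these lines (a sketch; worker and elapsed times in dm_exec_query_stats are in microseconds):

SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time  / 1000 AS total_cpu_ms,
       qs.total_elapsed_time / 1000 AS total_elapsed_ms,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY (qs.total_elapsed_time - qs.total_worker_time) DESC;  -- largest CPU-vs-elapsed gap first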
We have upgraded our pricing tier to 100 DTUs, after which our resource consumption in every category is below 10% of available resources. The execution plans look decent, and given that CPU times are so low, this shouldn't be the issue.
I have also checked wait times, which yielded no significant results. The wait times for individual queries show just a few milliseconds, and the only significant wait type is SOS_WORK_DISPATCHER, which doesn't ring a bell - or exist in Microsoft documentation.
The web application and the SQL server consuming the data are both in Azure West Europe, and considering that we have not seen considerable IO volumes, that shouldn't be a problem.
Does anyone have an idea what could be causing this, or what SOS_WORK_DISPATCHER is?

VDI_CLIENT_OTHER wait type in SQL Server 2016

After direct seeding on an on-prem SQL Server 2016, I still have 8 sessions (8 CPUs) that have been waiting on the VDI_CLIENT_OTHER wait type for two days. It seems to be a result of direct seeding, as mentioned in this answer:
Azure SQL high wait time on "VDI_CLIENT_OTHER"
And indeed, direct seeding was completed two days ago.
I will kill the sessions, I guess, as they appear to be doing nothing. Does anyone have more info about this wait type, or has anyone had the same issue? Thanks.
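For anyone in the same spot, a quick way to list the sessions sitting on that wait before killing them (a sketch):

SELECT s.session_id, s.login_name, s.program_name, r.wait_type, r.wait_time
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
WHERE r.wait_type = N'VDI_CLIENT_OTHER';

-- then, for each session_id returned:
-- KILL <session_id>;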

Rogue Process Filling up Connections in Oracle Database

Last week we updated our DB password, and ever since, the connections fill up after every DB bounce.
We have 20+ schemas, but connections to only one schema fill up. Nothing shows up in the sessions. There may be old apps accessing our database with the old password and filling up connections.
How can we identify how many processes are trying to connect to the DB server, and how many of them are failing?
Every time we bounce our DB servers, connections go through; after about an hour, no one else can make new connections.
BTW: in our company, we have LOGON and LOGOFF triggers which persist the session connect and disconnect information.
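One way to answer the "how many are failing" part (a sketch, assuming the traditional audit trail is enabled, e.g. audit_trail=db; ORA-01017 is the invalid-credentials return code):

AUDIT SESSION WHENEVER NOT SUCCESSFUL;  -- start recording failed logon attempts

SELECT a.username,
       TRUNC(a.timestamp, 'HH24') AS hour,
       COUNT(*)                   AS failed_logons
FROM   dba_audit_session a
WHERE  a.returncode = 1017         -- ORA-01017: invalid username/password
GROUP  BY a.username, TRUNC(a.timestamp, 'HH24')
ORDER  BY hour;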
It is quite possible that what you are seeing are recursive sessions created by Oracle when it needs to parse SQL statements [usually not a performance problem, but the processes parameter may need to be increased]:
Example 1: high values for dynamic_sampling cause more recursive SQL to be generated.
Example 2: I have seen a situation where an application did excessive hard parsing; this drives up the process count, since hard parsing requires new processes to execute parse-related recursive SQL (I increased the processes parameter in that case, since it was a vendor app). Since your issue is tied to the bounce, it could be that app startup requires a lot of parsing.
Example 3:
“Session Leaking” Root Cause Analysis:
Problem Summary: We observed periods where many sessions were being created, without a clear understanding of which part of the application was creating them and why.
RCA Approach: Since the DB doesn't keep a history of ended sessions, I monitored the situation by manually snapshotting v$session.
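A minimal version of that snapshot (a sketch; the table name is illustrative):

CREATE TABLE session_snap AS           -- empty shell with the columns of interest
SELECT SYSDATE AS snap_time, sid, serial#, username, status, process, program, event
FROM   v$session WHERE 1 = 0;

INSERT INTO session_snap               -- run periodically, e.g. from a scheduler job
SELECT SYSDATE, sid, serial#, username, status, process, program, event
FROM   v$session;
COMMIT;

-- the pattern noted in the analysis below: multiple sessions sharing one process#
SELECT snap_time, process, COUNT(*) AS sessions
FROM   session_snap
GROUP  BY snap_time, process
HAVING COUNT(*) > 1;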
Analysis:
I noticed a pattern where multiple sessions have the same process#.
As per the Oracle docs, these sessions are recursive sessions created by Oracle under an originating process that needs to run recursive SQL to satisfy the query (at parse time). They go away when the process that created them is done and exits.
If the process is long-running, they will stay around, inactive, until it is done.
These recursive sessions don't count against your session limit, and the inactive ones sit in an idle wait event, not consuming resources.
The recursive sessions are almost certainly the result of recursive SQL needed by the optimizer where optimizer stats are missing (as is the case with GTTs) and the initialization parameter optimizer_dynamic_sampling is set to 4.
The 50,000 sessions in an hour that we saw the other day are likely the result of a couple thousand select statements running (I've personally counted 20 recursive sessions per query, but this number can vary).
The ADDM report showed that the impact is small:
Finding 4: Session Connect and Disconnect
Impact is 0.3 average active sessions, 6.27% of total activity currently on the instance.
Average Active Sessions is a measure of database load (values approaching the CPU count would be considered high). This instance can handle up to 32 active sessions, so the impact is about 1/100th of capacity.
