Oracle 11g max login fail attempts workaround - database

My problem with the database starts with the fact that I can't really modify anything in it, and my project specialist has limited time to help me. Here is the thing:

My user in the Oracle database has an older schema than the actual production one; my section works on a stable, older version. After every release we keep running into the same issue: something (maybe on Jenkins, maybe not) automatically tries to update our database to a version we don't want. We tried to resolve it by changing the user's password, but that produced a new problem. The automated process keeps trying to log in, and every time it gets a wrong-password error it simply tries again. Oracle 11g has a default limit of 10 failed login attempts, after which it locks the whole account - the very account our application server uses to connect to the database.

We cannot investigate this by turning on auditing of failed logins, because the audit trail takes up database space and our DBA has not allowed it: if we exceed the space limit (about 11 GB) the whole database will be dead, and our project is not important enough to risk that. On top of that, the person who probably set up the scripts that cause our problem no longer works here.

Our workaround was to manually unlock the account so the application server could connect, then wait a few seconds for it to get locked again (the app server's established connection stayed stable). It is clumsy, you must admit, and the problem is that when the connection drops for any reason the app server will not get it back automatically - we have to unlock the account manually, which is not a solution. I have reconsidered it all again: my DBA has no time to help me, and I have neither the tools nor the access rights to investigate where this script (or whatever else is causing the problem) is being executed. So I started thinking: what if we set the limit of failed login attempts to unlimited? Would this decrease database performance? Would it create any new problems? Or maybe the solution would be to set PASSWORD_LOCK_TIME to a small value? I am asking for some arguments I could give my DBA to convince him to accept one of these new workarounds, so I can get back to working on code instead of these database problems.
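For reference, the profile change I have in mind would look something like this (the profile and user names are placeholders, and it assumes the account can be given its own profile so the DEFAULT profile stays untouched):

-- Option 1: stop locking the account on failed logins at all
CREATE PROFILE app_no_lock LIMIT
  FAILED_LOGIN_ATTEMPTS UNLIMITED;

-- Option 2: keep the limit but make the lockout expire quickly (1/1440 of a day = 1 minute)
-- CREATE PROFILE app_short_lock LIMIT FAILED_LOGIN_ATTEMPTS 10 PASSWORD_LOCK_TIME 1/1440;

ALTER USER app_user PROFILE app_no_lock;

-- If the account is currently locked, it still has to be unlocked once:
ALTER USER app_user ACCOUNT UNLOCK;

As far as I understand, neither setting affects query performance; the trade-off is only how long a misbehaving client (or an attacker) can keep hammering the account with bad passwords.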

Related

SSIS package error: single UPDATE in an Execute SQL Task

I am troubleshooting an error in a package.
Update MYTABLE for MYCOLUMN (REF to task name):Error: Executing the query "..." failed with the following error: "Invalid column name 'MYCOLUMN'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I have verified that the table and column exist, and the length of the field is far more than it needs: the value is 14 characters and the column is declared as varchar(250).
I have verified the script works on the server in SSMS outside of the context of the package.
I have verified the connection and database in the package is as I expect.
Is there a way to verify this on the server? I did try to look at the Connection Managers tab on the package configuration itself, i.e. in the Integration Services Catalogs->SSISDB->solutionfolder->..->package.dtsx->Configure context menu, but it is empty.
Any ideas on how to troubleshoot?
Just to add more context: the package contains 27 other tasks, 9 of them in a row linked to this task but all set to "on completion", and all seem to be doing work independently of each other. One task is a loop and the rest are single independent tasks. So I don't know at this stage whether it is perhaps a cascading connection issue; I am just reading what the log says.
I kicked off the package at 9:54am and the timestamp on the error log says 11:45am, so this error was reported nearly two hours into the run.
I would suggest the following to troubleshoot the issue.
First, disable all the other tasks and run only this one, so you can focus on this issue specifically. That will tell you whether the connection is working fine without issues.
Second, edit the task and check whether the parameters are set properly. Different providers have different ways of setting parameters, so check them against the Execute SQL Task documentation.
One more thing: the package may be pointing to a different connection than the one you used in SSMS. That would explain why the query works in SSMS while the connection used by the package does not have the schema changes yet. The SSISDB catalog query sketched below can help confirm which server the failing execution actually ran against.
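If the package is deployed to the SSISDB catalog, a query like the one below shows which server recent executions actually ran on and the error messages they logged (the package name is a placeholder):

-- Standard SSISDB catalog views; replace 'package.dtsx' with the real package name.
SELECT TOP (20)
       e.execution_id,
       e.package_name,
       e.server_name,                        -- the server that actually ran it
       e.status,
       m.message_time,
       m.message
FROM   SSISDB.catalog.executions     AS e
JOIN   SSISDB.catalog.event_messages AS m ON m.operation_id = e.execution_id
WHERE  e.package_name = 'package.dtsx'
  AND  m.message_type = 120                  -- 120 = error messages
ORDER BY m.message_time DESC;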
I finally figured it out before I read the previously offered suggestions, so I will give some credit if I can! FYI: we have a lot of dev servers. I clicked on the overview hyperlink in the All Executions log and it named another server. I also found the connection was set on the job calling the package, not on the package itself, so I have learnt something today. Anyhow, the job said one server but the overview said another, so I was back to square one scratching my head.
Then I decided to open the connection manager on the job, select the field, and make no change; rather than cancelling I clicked OK without thinking about it, and noticed the field changed to bold face. So I am assuming that if you make a manual change on the server in SSMS it shows up in bold, which is kind of useful. I can only assume this is an SSMS, SSIS or VS deployment bug: it does not overwrite the previous connection, although the SSMS interface says otherwise. Perhaps somebody can shed some light. Having not checked the server before I made a change and deployed it, I have no idea whether the previous settings were changed manually by someone or whether the connection in the package was changed and deployed. Anyhow, the job history shows it had been failing for a while, so it wasn't me; whoever made a change previously either didn't figure it out, didn't bother, didn't know how, or didn't notice. Anyhow, it is pointing to the correct server now!

Test Oracle connectivity using sqlplus without password

I am in a unique situation where I need to test my servers' connectivity to Oracle databases; however, I do not have access to any account or password.
The reason the connectivity needs to be tested is that there are often multiple layers of firewalls between my servers and the database. Also, recently, while trying to access RAC/Exadata databases, we realized that running telnet against the "scan" IP range (the only range visible to me) was not enough, and that there are underlying physical/virtual IPs actually used to connect which were blocked. If I can test connectivity I can at least confirm the database is accessible.
I thought about connecting using sqlplus test@DB, where the "test" account doesn't actually exist. If I get a reply saying the username/password is incorrect and logon is denied, then at least I know the connectivity is working, because the request reached the database and authentication was attempted. But I have audit concerns (will the DBAs think someone is trying to hack the system?), and I also wonder whether there is an actual way or command to do this test.
As @OldProgrammer pointed out, this is pretty much the optimal case for tnsping from the command line:
tnsping MY_SERVICE_NAME
Here's a good post showing the basic options. Oh, and I'm pretty sure the DBAs can still see the traffic if they want to.
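If you want to go one step further than the listener and confirm that the database itself answers, a rough sketch of the probe you describe (the service name and dummy credentials are placeholders) is:
sqlplus -L test/test@MY_SERVICE_NAME
An ORA-01017 (invalid username/password) reply means the request reached the database and authentication was attempted, while ORA-12541 (no listener) or ORA-12170 (connect timeout) point to a network or listener problem before the database was ever reached. And as noted above, a failed logon like this will be visible to the DBAs if they audit failed logins.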

SQL Agent Job - Connection may not be configured correctly or you may not have the right permissions on this connection?

I'm getting this error when running an SSIS package through SQL Agent
Failed to acquire connection "ORACLE ADO.NET". Connection may not be configured correctly or you may not have the right permissions on this connection.
When I log on as the SQL Agent user and run the SSIS package directly, it is fine. When I then execute it through the SQL Agent job, it fails.
I've read around extensively on this topic, and it seems a lot of the advice concerns how you are logged in, configuring proxy accounts, and so on, none of which has been helpful.
I am logging onto an Oracle database with an ADO.NET connection. The connection string is as follows (data source, user ID and password have been changed):
Data Source=DATASOURCE;User ID=userid;Password=password;Persist Security Info=True;Unicode=True;
I'm loading this from a registry setting using package configuration. To check that I am getting the correct string, I am writing it into a temporary log table. I am definitely getting the string I need from the correct registry setting.
I've tested the Oracle login credentials through PL/SQL Developer, and it lets me log in just fine.
As far as I can tell, since I'm using an explicit user name and password for the Oracle connection, it just shouldn't matter who the SSIS package is run as. The only point of failure that I can see would be the reading of the information from the registry, but that seems fine.
I'm really quite baffled, I must confess, and would appreciate any help some of the splendid experts here can offer.
Many thanks,
James
Ok, tracked this one down after quite a lot of pain.
It was working fine on one environment, but not another, so I fired up Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) and ran a package through the SQL Agent job, comparing which system entities were hit on each environment.
On the failing environment, at the point of the bulk transfer operation, the package attempted to get the Oracle 11 client DLL, and then hung.
I knew that this was installed, and, moreover, the DLL path was a system environment setting. Further investigation revealed that the server had not been rebooted since the Oracle client install, and the SQL Server Agent process had not been recycled.
Yes, can you believe it, the old helpdesk fix "Can you reboot your computer?" worked.
Sigh!
We had issues at a client with running packages connecting to Oracle that were stored on our SQL Server instance. The workaround we found was to change the package's ProtectionLevel property to "DontSaveSensitive" and, for security purposes, encrypt the username and password in the package configuration, decrypting them with a UDF in SQL Server. Of course, before you try the whole encryption part, I would recommend putting the username and password into the package configuration without encrypting the values, to see whether changing the protection level setting solves your specific problem. I hope this helps.
I was getting this error when the tnsnames.ora file did not have a valid entry for the environment.
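For comparison, a minimal valid entry looks something like this (the alias, host, port and service name are placeholders for your environment):
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = MYDB))
  )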

SQL server 2005 Connection Error: Cannot generate SSPI context

Provider used: Microsoft OLE DB Provider for SQL Server. Can anyone help me with this?
I was trying to connect with LLBLgen
This MSDN blog page has some useful information on this:
http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx
In my case, I found the account was locked.
The reason was that I had previously tried to log in more than 3 times from another machine.
It did not recognise me, and then finally it locked my account.
Unlocking the account made everything work fine.
br
Jan
The error you get is almost always caused by a problem with using Windows Authentication. Please try switching to a SQL server login (username/password), or make sure your current Windows login has access to the SQL server and database you're trying to connect to.
-Edoode
I fixed this by mapping a drive to the server running MSSQL. This seemed to generate some kind of trust that allows MSSQL to connect without this error even after a reboot.
I used to get this error sometimes when connecting to my local SQL Server with Windows Authentication. I never fixed it unfortunately - it went away when I reinstalled windows.
I think a reboot used to fix it - have you tried that? Not exactly the best solution, I know :P
Try to synchronize your date and time with your domain's. The SSPI issue may be related to Active Directory authentication problems, some of which are related to date and time changes. This is very simple to check and fix. Try it out!
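For example, on a domain-joined Windows machine (assuming the Windows Time service is in use) you can check and force a resync with:
net time /domain
w32tm /resync
The first command shows the time according to the domain controller; the second forces the machine to resync with its configured time source.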
There is a Microsoft KB article that addresses many of the causes of this error (KB811889) at the following URL: http://support.microsoft.com/kb/811889.
A lot of Googling suggests that one of the diagnostic steps there helped most people who encountered the issue.
I recently had this exact issue, where I'd get this error only when authenticating with certain accounts but not others. Ultimately, what was causing my problem was not mentioned in any KB or article I found on the net. Through trial and error I discovered that when the account used for SSPI authentication to SQL Server (2k8) happened to be in a large number of groups (in my case over 250), you would get the "Cannot generate SSPI context" error. I suspect it has something to do with overflowing the security token that Kerberos uses; I have seen similar strange authentication problems for user accounts in a large number of groups.
I get the problem when the time is set differently on my client machine than on either the server or the AD machine (I was trying to test into the future).
Short answer: Have you recently changed the user the service is running as? Was there a system crash?
Long Answer:
I know this is old, but I want to post my experience that I just had.
We had spent hours Googling and found nothing that worked.
Eventually we ran across a set of actions that could cause this:
If you change the user that SQL Server runs as (e.g. from Local System to a domain user), then do certain updates, and the server doesn't safely reboot -- you get this.
So, we set things back to Local System and bam it worked. Swapped it to the domain user, no worky worky. Ok. Swapped it to Local System, rebooted, swapped it to domain user, rebooted, bam -- worky worky. All was good in our world. Later that morning it crapped out again... still working on that now but the priority is changing and I'm not sure we're going to continue work on this problem so I wanted to post something in case this happens to someone else.
What caused ours was this: we did an update and, apparently, we learned that it's bad practice to let SQL Server run as Local System, so we changed it to a domain user. We never rebooted, only restarted the service. A month later, we did updates. We didn't reboot. Another month went by and a power strip fried, causing the server to have an unexpected shutdown. Yet another month later we found the problem, because we rarely connect to this particular database (interestingly, SQL Server 2008 worked fine; it was only 2005). Or at least this is the best explanation we've come across.
Our admin guy doesn't like Vista and likes to blame everything on Vista (he refuses to let us test Windows 7), so he Googled "sspi vista" or something like that (I know it had "sspi" and "vista", but it might have had another term as well, in case you need to Google it) and ran across an article that pretty much explained our scenario. After we had a meeting, we all remembered these pieces and put the picture together.
In my case, time synchronization in the Windows 2003 domain environment was actually the issue.
This was quite easy to overlook, as the two machines were in different time zones while showing the same time on their clocks, which meant they were in effect about an hour apart.
So other than the time on their watches, check the time zones as well.

Why do I get this error "[DBNETLIB][ConnectionRead (recv()).]General network error" with ASP pages

Occasionally, on an ASP (classic) site, users will get this error:
[DBNETLIB][ConnectionRead (recv()).]General network error.
It seems to be random and not connected to any particular page. The SQL Server is separate from the web server, and my guess is that every once in a while the "link" between the two goes down. Router/switch issue... or has someone else run into this problem before?
Using the same setup as yours (ie separate web and database server), I've seen it from time to time and it has always been a connection problem between the servers - typically when the database server is being rebooted but sometimes when there's a comms problem somewhere in the system. I've not seen it triggered by any problems with the ASP code itself, which is why you're seeing it apparently at random and not connected to a particular page.
I'd seen this error many times. It can be caused by many things, including network errors :).
But one of the reasons could be a built-in feature of MS SQL.
The feature detects DoS attacks -- in this case, too many requests from the web server :).
But I have no idea how we fixed it :(.
In SQL Server Configuration Manager, disable TCP/IP and enable Shared Memory and Named Pipes.
Good luck!
Not a solution exactly, and not the same environment; however, I get this error in a VBA/Excel program, and the problem is that I have a hanging transaction which has not been committed in SQL Server Management Studio (SSMS). After closing SSMS, everything works. So the lesson is that a hanging transaction can block stored procedures from proceeding (an obvious fact, I know!). Hope this helps someone here.
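As a rough sketch (assuming SQL Server 2005 or later), these let you spot the open transaction and what it is blocking before resorting to closing SSMS:
-- Oldest active transaction in the current database
DBCC OPENTRAN;
-- Requests that are currently blocked, and which session is blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM   sys.dm_exec_requests
WHERE  blocking_session_id <> 0;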
Open a command prompt (Run as administrator) and type the following command on the client side:
netsh advfirewall set allprofiles state off
FWIW, I had this error from Excel, which would hang on an EXEC that worked fine within SSMS. I've seen problematic queries before, which were also OK within SSMS, due to "parameter sniffing" and unsuitable cached query plans. Making a minor edit to the SP cured the problem, and it worked OK afterwards in its original form. I'd be interested to hear if anyone else has encountered this scenario too. Try the good old OPTION (OPTIMIZE FOR UNKNOWN) :)
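For reference, a minimal sketch of where that hint goes (SQL Server 2008 or later; the procedure, table and parameter names are just placeholders):
CREATE PROCEDURE dbo.GetOrdersByCustomer (@CustomerId INT)
AS
BEGIN
    -- Build a plan for an "average" parameter value instead of sniffing the
    -- first value the procedure happens to be called with.
    SELECT OrderId, OrderDate, Total
    FROM   dbo.Orders
    WHERE  CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR UNKNOWN);
END;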
