One of my tables holds data for business transactions, and I have to run a job whenever there has been no transaction for a 5-minute interval. I am trying to achieve this using Timer() in Java. To get notified when a transaction is executed I need some kind of trigger (I do not have access to the application code, as it is a 3rd-party tool), so for that purpose I am using database change notification.
However, while running this I very often get the error below. I am using Java 1.6 and ojdbc6.jar for the connection, and the application runs on WebLogic against an Oracle 11g database.
Exception in thread "Thread-4" java.lang.IndexOutOfBoundsException
    at java.nio.Buffer.checkIndex(Buffer.java:540)
    at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:139)
    at oracle.jdbc.driver.NTFConnection.unmarshalOneNSPacket(NTFConnection.java:334)
    at oracle.jdbc.driver.NTFConnection.run(NTFConnection.java:182)
Please modify the example at http://appcrawler.com/wordpress/2012/08/28/jdbc-and-oracle-database-change-notification/ for your listener and check whether the issue still exists. My understanding is that the issue is not related to the Oracle DB but to the Java implementation in your code. Please add the java tag to your question as well.
I am using MVC Razor in one of my web applications built on .NET.
One of the screens in my web application has 2 DropDownLists, 1 button and 1 grid. When I click on the 1st DDL, instead of moving focus to the 2nd DDL the screen just keeps loading and eventually throws the exception: the timeout period expired prior to completion of the operation.
The DDL select query is: select * from Table where year='2014' and name='pqr' (issued through Entity Framework in the code-behind), and this is the query that throws the exception. The same thing happens in the DB itself: when I run the equivalent query in SSMS, it keeps on executing.
If I restart the SQL Server service, everything works normally again and the result is shown, but after some time the same issue occurs. Each time this happens I have to restart the SQL Server service again, which shouldn't be necessary.
In SSMS I checked Activity Monitor and found that sometimes some SPIDs are getting blocked, and it is exactly at those times that the screen/query keeps loading/executing.
Using DBCC INPUTBUFFER(spid); I was able to see which stored procedure / table is doing the blocking (please correct me if my understanding is wrong here). I also tried using WITH (NOLOCK) on certain tables inside the stored procedure, which didn't work out for me either.
How can I resolve this issue at the DB level? My application is already live.
Note: This issue occurs intermittently.
I am working on an iOS project that has a Sybase UltraLite database which is synchronized with a Sybase SQL Anywhere 12 database using MobiLink.
Everything was working properly until today, when I decided to add some fields to the main database so that they would be synchronized as well.
I updated the schema of the consolidated database from the main engine, then updated the schema of the remote database from the consolidated engine, then mapped the added fields together and deployed a new UltraLite database.
Please note that this is not the first time I have done a similar task; I regularly add fields and sync the databases.
After the update, when I synchronize using the blank UltraLite database, MobiLink fails giving only this error: Synchronization Failed: -1305 (MOBILINK_COMMUNICATIONS_ERROR) %1:201 %2: %3:0
I researched error number 201 in Sybase and it points to SQLE_NOT_PUBLIC_ID,
and in the Sybase documentation the error's probable cause is:
"The option specified in the SET OPTION statement is PUBLIC only. You cannot define this option for any other user."
I have tried redeploying, and I have tried moving the engine to a Windows PC; everything gives the same error. I have no clue where this SET OPTION statement comes from or how to solve it.
Any hints are appreciated!
The problem was simply caused by a too-small network timeout value in the MobiLink stream parameters:
    info.stream_parms = (char*) @"host=192.168.0.100;port=3309;timeout=1";
I just changed the value from timeout=1 to timeout=300 and it worked!
My problem with the database starts with the fact that I can't really modify anything in the database, and my project's database specialist has very limited time to help me. Here is the situation:
My user in the Oracle database has an older schema than the actual production one; my team works on a stable, older version. After every release we keep running into the same issue: something (maybe on Jenkins, maybe not) automatically tries to update our database to a version we don't want. We tried to resolve it by changing the user's password, but that produced a new issue: the automated process tries to log in, gets a wrong-password error, and keeps retrying. Oracle 11g has a default limit of 10 failed login attempts, after which it locks the whole user account, and that is the same account our application server uses to connect to the DB.
We cannot investigate this by turning on auditing of failed logins, because the audit records take up database space and our DB guy has not allowed it: if we exceed the space limit (about 11 GB) the whole database will be dead, and our project is not important enough to risk that. Another problem is that the person who probably set up the scripts causing all this no longer works here.
Our workaround was to manually unlock the account so that the application server could connect, and then wait a few seconds until it got locked again (the application server's established connection remained stable). This is admittedly silly, and the real problem is that when the connection drops for any reason, the app server will not get it back automatically; we have to unlock the account manually, which is not a solution. I have reconsidered it all again: my DB guy has no time to help me, and I have no tools or access rights to investigate where this script (or whatever else is causing the failed logins) is being executed. So I started thinking: what if we set the limit on failed login attempts to unlimited? Would that decrease database performance? Would it create any new problems? Or would a better solution be to set PASSWORD_LOCK_TIME to a small value? I am asking for arguments I could give my DB guy to convince him to accept one of these workarounds, so that I can get back to working on code instead of on this database problem.
I had a package that worked perfectly until I decided to put some of its tasks inside a Sequence Container (more on why I wanted to do that: How to make a SSIS transaction in my case?).
Now I keep getting this error:
[Execute SQL Task] Error: Failed to acquire connection "MyDatabase". Connection may not be configured correctly or you may not have the right permissions on this connection.
Why could this be happening, and how do I fix it?
I started writing my own examples to reply to your question. Then I remembered that I met Matt Mason, the Microsoft Program Manager for SSIS, when I spoke at a SQL Saturday in New Hampshire.
Although I spent three years between 2009 and 2011 writing nothing but ETL code, I figured Matt already had an article out there:
http://www.mattmasson.com/2011/12/design-pattern-avoiding-transactions/
Here is a high level summary of the approaches and the error you found.
[ERROR]
The error you found is related to MSDTC (the Distributed Transaction Coordinator) having issues. It must be configured and working correctly; firewalls are a common cause of problems. Check out this post:
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/3a5c847e-9c7e-4628-b857-4e6edaa7936c/sql-task-transaction-required?forum=sqlintegrationservices
[SOLUTION 1] - Use transactions at the package, task or container level.
Some data providers do not support MSDTC, and some tasks do not support transactions. Performance may also suffer, since you are adding a new layer to support two-phase commits.
http://technet.microsoft.com/en-us/library/aa213066(v=sql.80).aspx
[SOLUTION 2] - Use the following tasks.
A - BEGIN TRAN (EXECUTE SQL)
B - YOUR DATA FLOW
C - TEST THE RETURN CODE
1 - GOOD = COMMIT (EXECUTE SQL)
2 - FAILURE = ROLLBACK (EXECUTE SQL)
You must have the RetainSameConnection property set to True on the connection.
This forces all calls through one session (SPID), so all transaction management is now handled on the server.
[SOLUTION 3] - Write all your code so that it is restartable. That does not necessarily mean you have to use checkpoints.
One approach is to always use UPSERTs: insert new data, update existing data, and treat deletes as just a flag in the table. This pattern allows a failed job to be executed many times while still arriving at the same final state.
Another solution is to handle all error rows by placing them into a hospital table for manual inspection, correction, and insertion.
Why not use a database snapshot (it only keeps track of changed pages)? Take a snapshot before the ETL job; if an error occurs, restore the database from the snapshot. The last step is to drop the snapshot to clean house.
In short, I hope these ideas are enough to help you out.
While the transaction option is nice, it does have some downsides. If you need an example, just ping me.
Sincerely
J
What package protection level are you using? Don't Save Sensitive? Encrypt Sensitive with User Key? I'd recommend changing it to Encrypt Sensitive with Password and entering a password; that way the stored connection password won't be lost.
Have you tried testing the connection to the database in the connection manager?
I want to use NHibernate in a Windows service. When the system boots, my service might start before the database service does. In that case the NHibernate configuration fails and the service crashes. So now I'm wondering how I can check whether the database service has already started; if it has not yet started, my service should wait a bit and try again later.
If your service always runs on the same machine as SQL Server, you should be using ServiceInstaller.ServicesDependedOn to tell the Windows Service Control Manager (SCM) that you depend on 'MSSQLSERVER' (the name of the service that runs SQL Server).
From MSDN:
A service can require other services to be running before it can start. The information from this property is written to a key in the registry. When the user (or the system, in the case of automatic startup) tries to run the service, the Service Control Manager (SCM) verifies that each of the services in the array has already been started.
ServiceInstaller is the class used by InstallUtil when it installs your service. Other installation packages, including InstallShield, also support this Windows functionality, and there is an equivalent sc command.
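For illustration only, a minimal sketch of what such an installer could look like (the service name MyWindowsService is a placeholder, and 'MSSQLSERVER' assumes the default SQL Server instance; a named instance would be 'MSSQL$INSTANCENAME'):

    using System.ComponentModel;
    using System.Configuration.Install;
    using System.ServiceProcess;

    [RunInstaller(true)]
    public class MyServiceInstaller : Installer
    {
        public MyServiceInstaller()
        {
            var processInstaller = new ServiceProcessInstaller
            {
                Account = ServiceAccount.LocalSystem
            };

            var serviceInstaller = new ServiceInstaller
            {
                ServiceName = "MyWindowsService",            // placeholder name
                StartType = ServiceStartMode.Automatic,
                // Tell the SCM not to start this service until SQL Server is running.
                ServicesDependedOn = new[] { "MSSQLSERVER" }
            };

            Installers.Add(processInstaller);
            Installers.Add(serviceInstaller);
        }
    }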
So your service will only start after SQL Server is already running. Even so, it is still a good idea to offload any potentially long-running startup work to a background thread and to do as little as possible in the OnStart method. Ideally you would just spawn a new initialization thread that takes care of building the NHibernate session factory. If for some reason you still want to do this work in OnStart, you should retry the NHibernate initialization and call ServiceBase.RequestAdditionalTime to avoid:
Error 1053: The service did not respond to the start or control request in a timely fashion.
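A rough sketch of that idea, with BuildSessionFactory() standing in as a placeholder for your actual NHibernate bootstrap code:

    using System;
    using System.ServiceProcess;
    using System.Threading;

    public class MyWindowsService : ServiceBase
    {
        private Thread _initThread;

        protected override void OnStart(string[] args)
        {
            // Return from OnStart quickly; do the slow work on a background thread.
            // If you insist on initializing synchronously here instead, call
            // RequestAdditionalTime(...) so the SCM does not report error 1053.
            _initThread = new Thread(InitializeNHibernate) { IsBackground = true };
            _initThread.Start();
        }

        private void InitializeNHibernate()
        {
            // Keep retrying until the database is reachable; a failed attempt
            // must not crash the service.
            while (true)
            {
                try
                {
                    BuildSessionFactory();
                    return;
                }
                catch (Exception)
                {
                    Thread.Sleep(TimeSpan.FromSeconds(10));
                }
            }
        }

        private void BuildSessionFactory()
        {
            // new Configuration().Configure().BuildSessionFactory(), or similar.
        }
    }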
Ideally, your service should not depend on database availability at all, because the database might be running on a remote machine. A service is an 'always on' process and should tolerate intermittent database connectivity issues.
I have no clue whether there are better ways, but in your service startup, check the system uptime. If it is less than, say, 5 minutes, wait for (5 minutes - uptime) and after that start the rest of the service as you normally would.
See the following question for reading the uptime: Calculating server uptime gives "The network path was not found".
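A rough sketch of that idea, using Environment.TickCount as a simple uptime source instead of the performance counter from the linked question (the 5-minute boot window is just the example value mentioned above, and the class name is only for illustration):

    using System;
    using System.Threading;

    static class StartupHelpers
    {
        public static void WaitOutBootWindow()
        {
            // Environment.TickCount is the number of milliseconds since the machine started
            // (it wraps after roughly 25 days, which is good enough for a boot-time check).
            TimeSpan uptime = TimeSpan.FromMilliseconds(Environment.TickCount);
            TimeSpan bootWindow = TimeSpan.FromMinutes(5);

            if (uptime < bootWindow)
            {
                // We were probably started as part of the boot sequence,
                // so give the other services time to come up first.
                Thread.Sleep(bootWindow - uptime);
            }
        }
    }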
This is not a solution, however, for the case where your service tries to connect to a SQL Server that is down. If that happens you want to handle the exception and actually be notified that SQL Server is down; it is very unlikely that you want the service to keep trying without you being aware of the outage.
You could use the ServiceController class and call its static GetServices() method to get the list of installed services. It returns an array of services; find the right one and check its status.
See ServiceController on MSDN
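For example, something along these lines (the service name 'MSSQLSERVER' is an assumption for the default instance, and the wrapper class is only for illustration):

    using System;
    using System.Linq;
    using System.ServiceProcess;
    using System.Threading;

    static class SqlServerWatcher
    {
        public static void WaitForSqlServer()
        {
            while (true)
            {
                // GetServices() returns every service installed on the local machine.
                ServiceController sql = ServiceController
                    .GetServices()
                    .FirstOrDefault(s => s.ServiceName == "MSSQLSERVER");

                if (sql != null && sql.Status == ServiceControllerStatus.Running)
                {
                    return;
                }

                Thread.Sleep(TimeSpan.FromSeconds(10));
            }
        }
    }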
Currently I make sure I can establish a connection to the required database and run a default (configurable) query. If that succeeds, I proceed to start the service.
What I've found is that in some cases, even if the MSSQL service is started, that does not guarantee you can actually connect to it and execute queries against it.
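A minimal sketch of that kind of check, assuming SQL Server and treating both the connection string and the test query (SELECT 1 here) as configurable values:

    using System.Data.SqlClient;

    static class DatabaseProbe
    {
        public static bool CanQueryDatabase(string connectionString, string testQuery = "SELECT 1")
        {
            try
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(testQuery, connection))
                {
                    connection.Open();
                    command.ExecuteScalar();   // a real round trip, not just a socket connect
                    return true;
                }
            }
            catch (SqlException)
            {
                return false;
            }
        }
    }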