I had a package that worked perfectly until I decided to put some of its tasks inside a Sequence Container (more on why I wanted to do that here: How to make a SSIS transaction in my case?).
Now I keep getting this error:
[Execute SQL Task] Error: Failed to acquire connection "MyDatabase". Connection may not be configured correctly or you may not have the right permissions on this connection.
Why could this be happening, and how do I fix it?
I started writing my own examples to reply to your question. Then I remembered that I met Matt Mason when I spoke at a SQL Saturday in New Hampshire; he is the Microsoft Program Manager for SSIS.
Since I spent three years between 2009 and 2011 writing nothing but ETL code, I figured Matt would have an article on this.
http://www.mattmasson.com/2011/12/design-pattern-avoiding-transactions/
Here is a high-level summary of the error you found and the approaches available.
[ERROR]
The error you found is related to MSDTC (the Microsoft Distributed Transaction Coordinator) having issues. MSDTC must be configured and working correctly on every machine involved; firewalls are a common culprit. Check out this post.
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/3a5c847e-9c7e-4628-b857-4e6edaa7936c/sql-task-transaction-required?forum=sqlintegrationservices
[SOLUTION 1] - Use transactions at the package, task or container level.
Some data providers do not support MSDTC, and some tasks do not support transactions. This option can also hurt performance, since you are adding a new layer to support two-phase commits.
http://technet.microsoft.com/en-us/library/aa213066(v=sql.80).aspx
[SOLUTION 2] - Use the following tasks.
A - BEGIN TRAN (EXECUTE SQL)
B - YOUR DATA FLOW
C - TEST THE RETURN CODE
1 - GOOD = COMMIT (EXECUTE SQL)
2 - FAILURE = ROLLBACK (EXECUTE SQL)
You must have the RetainSameConnection property set to True on the connection.
This forces all calls through one session (SPID), so all transaction management now happens on the server.
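A minimal sketch of the statements those Execute SQL tasks would run (assuming a single SQL Server connection manager with RetainSameConnection = True; the task labels match the list above):

-- Task A (Execute SQL): open the transaction on the shared session
BEGIN TRANSACTION;

-- Task B: the Data Flow runs here, using the same connection manager

-- Task C-1 (Execute SQL, success path): make the work permanent
COMMIT TRANSACTION;

-- Task C-2 (Execute SQL, failure path): undo everything
ROLLBACK TRANSACTION;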
[SOLUTION 3] - Write all your code so that it is restartable. This does not mean you have to go out and use checkpoints.
One solution is to always use UPSERTs: insert new data, update old data, and make deletes only a flag in a table. This pattern allows a failed job to be executed many times with the same final state being achieved.
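Here is a sketch of the UPSERT idea as a single MERGE statement (requires SQL Server 2008 or later; table and column names are hypothetical). Running it twice with the same input leaves the same final state:

-- Insert new rows, update existing ones, and treat deletes as a flag.
MERGE dbo.Customer AS tgt                    -- hypothetical target table
USING dbo.CustomerStaging AS src             -- hypothetical staging table
   ON tgt.CustomerId = src.CustomerId
WHEN MATCHED THEN
   UPDATE SET tgt.Name      = src.Name,
              tgt.IsDeleted = src.IsDeleted  -- deletes are only a flag
WHEN NOT MATCHED BY TARGET THEN
   INSERT (CustomerId, Name, IsDeleted)
   VALUES (src.CustomerId, src.Name, src.IsDeleted);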
Another solution is to handle all error rows by placing them into a hospital table for manual inspection, correction, and insertion.
Why not use a database snapshot (which keeps track of just the changed pages)? Take a snapshot before the ETL job. If an error occurs, restore the database from the snapshot. As a last step, remove the snapshot from the system to clean house.
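A sketch of the snapshot approach (database names and the file path are hypothetical; database snapshots require Enterprise Edition on SQL Server versions of that era):

-- Before the ETL job: take a snapshot
CREATE DATABASE MyDatabase_Snap
ON (NAME = MyDatabase_Data,                        -- logical data file name of the source
    FILENAME = 'D:\Snapshots\MyDatabase_Snap.ss')  -- hypothetical path
AS SNAPSHOT OF MyDatabase;

-- If the ETL fails: revert the whole database to the snapshot
RESTORE DATABASE MyDatabase
FROM DATABASE_SNAPSHOT = 'MyDatabase_Snap';

-- Last step (either way): clean house
DROP DATABASE MyDatabase_Snap;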
In short, I hope these ideas are enough to help you out.
While the transaction option is nice, it does have some drawbacks. If you need an example, just ping me.
Sincerely
J
What package protection level are you using? Don't Save Sensitive? Encrypt Sensitive with User Key? I'd recommend changing it to Encrypt Sensitive with Password and entering a password; that way the stored password won't disappear.
Have you tried testing the connection to the database in the connection manager?
To test error handling in an application, I'm looking for a way to make a transaction commit result in an error.
The application is written in C and uses ODBC to talk to a SQL Server 2017 data source. The application starts a database transaction and executes arbitrary SQL (which I can change for the sake of the test). Then it commits the transaction (using ODBC's SQLEndTran()). I want to build a test that verifies the error handling of the commit.
Is there an easy and reliable way to make the commit fail, e.g. by executing some specific SQL script before the commit, or by changing the database or the data source settings?
EDIT / clarification: what I need to fail is the transaction commit itself (specifically, the SQLEndTran() call must complete with an error). SQL executed before that shall complete successfully.
If you are able to time it correctly in a testing framework, you can do a few things (see the sketch after this list):
1. Kill the session from a separate connection in the testing framework.
2. Change the firewall configuration to emulate a network error.
3. Switch the database to single-user mode or stop the SQL Server service.
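A sketch of options 1 and 3 in T-SQL (the session id and database name are hypothetical; run these from a separate "chaos" connection owned by the test framework):

-- Option 1: kill the session under test so its next SQLEndTran() fails.
DECLARE @spid int = 57;  -- hypothetical: the session_id captured by the test
DECLARE @sql nvarchar(20) = N'KILL ' + CAST(@spid AS nvarchar(10));
EXEC (@sql);             -- KILL requires a literal, hence dynamic SQL

-- Option 3: bounce every other session by forcing single-user mode.
ALTER DATABASE TestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ...let the application attempt its commit here...
ALTER DATABASE TestDb SET MULTI_USER;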
The easiest way is to force a divide by zero:
declare @SomeVal int = 0
set @SomeVal = 2 / @SomeVal
--EDIT--
Since I guess you want the commit itself to fail, you could simply add a rollback right before the commit. Then the error would be raised on the commit statement.
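For example, a sketch of that idea: if the test injects this batch on the application's connection just before it calls SQLEndTran(SQL_COMMIT, ...), the commit itself should fail because the transaction no longer exists (on SQL Server this typically surfaces as msg 3902, "The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION."):

-- Executed inside the application's open transaction:
ROLLBACK TRANSACTION;  -- silently ends the transaction early
-- The subsequent SQLEndTran(SQL_COMMIT) now has nothing to commit
-- and should return an error instead of SQL_SUCCESS.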
I am troubleshooting an error in a package.
Update MYTABLE for MYCOLUMN (REF to task name):Error: Executing the query "..." failed with the following error: "Invalid column name 'MYCOLUMN'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I have verified that the table and column exist, and that the field's declared length is far more than it needs: the value is only 14 characters and the column is declared varchar(250).
I have verified the script works on the server in SSMS outside of the context of the package.
I have verified that the connection and database in the package are as I expect.
Is there a way to verify on the server? I did try to look at the Connection Managers tab on the package configuration itself, i.e. in the Integration Services Catalogs->SSISDB->solutionfolder->..->package.dtsx->Configure context menu, but it is empty.
Any ideas on how to troubleshoot?
Just to add more context: the package contains 27 other tasks. Nine tasks in a row are linked to this task, but all are set to "on completion", and each seems to be doing work independent of the others. One task is a loop and the rest are single independent tasks. So I don't know at this stage whether it is perhaps a cascading connection issue; I am just reading what the log says.
I kicked off the package at 9:54am; the timestamp on the error log says 11:45am, so this was reported nearly two hours into the run.
I would suggest the following to troubleshoot the issue:
1. Keep just this task and disable all the others, so that you can focus on this issue specifically. That will tell you whether the connection is working fine without issues.
2. Edit the task and see whether the parameters are set properly. Different providers have different ways of setting parameters; check the Execute SQL Task documentation for how parameters should be specified.
3. You may be pointing the package at a different connection than the one you used in SSMS, so it works in SSMS while the connection used in the package does not yet have the schema changes. A quick check for this is shown below.
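To rule out the wrong-server case, a quick check (a sketch; substitute your own table and column names) is to run this over the exact connection string the package uses, not the one you happen to have open in SSMS:

-- Confirm which server/database the session is actually on
SELECT @@SERVERNAME AS server_name, DB_NAME() AS database_name;

-- Confirm the column exists there
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MYTABLE'      -- table from the failing task
  AND COLUMN_NAME = 'MYCOLUMN';   -- column the error complains about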
I finally figured it out before I read the previously offered suggestions, so I will give some credit if I can! FYI: we have a lot of dev servers. I clicked on the overview hyperlink in the All Executions log and it named another server. I also found the connection on the job calling the package, not on the package itself, so I have learnt something today. Anyhow, the job said one server but the overview said another, so again I was back to square one scratching my head.
Then I decided to open the connection manager on the job, select the field, and make no change; rather than cancelling, I clicked OK without thinking about it and noticed the field changed to bold face. So I am assuming that if you make a manual change on the server in SSMS, it shows up in bold, which is kind of useful. I can only assume this is an SSMS, SSIS, or VS deployment bug: the deployment does not overwrite the previous connection, although the SSMS interface says otherwise. Perhaps somebody can shed some light. Having not checked the server before I made a change and deployed it, I have no idea whether the previous settings were changed manually by someone or whether the connection in the package was changed and deployed. Anyhow, the job history shows it had been failing for a while, so it wasn't me; whoever made a change previously either didn't figure it out, didn't bother, didn't know how, or didn't notice. Anyhow, it is pointing to the correct server now!
I am having some issues with database backups.
My database is in simple recovery mode and a database backup occurs every night. We sometimes get the backup job failing with the error below.
ERROR:
The operating system returned the error '112(failed to retrieve text for this error. Reason: 15105) while attempting 'SetEndOfFile' on \backups\sqlbackups\filename
Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Problems with the query / "ResultSet" property not set correctly / parameters not set correctly: unlikely, since this same job has been running for the past two years.
I am still unsure why this happens sometimes.
If anyone has had the same issue and figured out the possible reason, please discuss.
Server info: SQL Server 2008 R2, Standard
Database info: simple recovery mode and is acting as a publisher with size 1.4TB
Thanks in advance
Operating system error 112 is ERROR_DISK_FULL: it seems you don't have enough space on your destination. Make sure there is enough free space on the target and try again. If you use a third-party tool to back up your databases, set its "auto-delete" option to delete your old backups.
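One way to sanity-check space from T-SQL (a sketch; xp_fixeddrives is undocumented but long-standing, and it only reports local fixed drives, so for a UNC target like the one in this error you must check free space on the file server hosting the share):

-- Lists each local fixed drive with its free space in MB
EXEC master.sys.xp_fixeddrives;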
Can JDBC connections which are closed due to database unavailability be recovered?
To give background, I get the following errors in sequence. It doesn't look to have been a manual restart. The reason for my question is that I am told the app behaved correctly without the restart. So if the connection was lost, can it be recovered after a DB restart?
java.sql.SQLException: ORA-12537: TNS:connection closed
java.sql.SQLRecoverableException: ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
IBM AIX RISC System/6000 Error: 2: No such file or directory
java.sql.SQLRecoverableException: ORA-01033: ORACLE initialization or shutdown in progress
No. The connection is "dead". Create a new connection.
A good approach is to use a connection pool, which will test if the connection is still OK before giving it to you, and automatically create a new connection if needed.
There are several open source connection pools to use. I've used Apache's DBCP, and it worked for me.
Edited:
Given that you want to wait until the database comes back up if it's down (interesting idea), you could implement a custom version of getConnection() that "waits a while and tries again" if the database doesn't respond.
p.s. I like this idea!
The connection cannot be recovered. What can be done is to fail over the connection to another database instance. RAC and Data Guard installations support this configuration.
This is no problem for read-only transactions. However, for transactions that execute DML this can be a problem, especially if the last call to the DB was a commit. In the case of a commit, the client cannot tell whether the commit call completed or not: did the DB fail before executing the commit, or after executing the commit (but before sending the acknowledgment back to the client)? Only the application has this logic and can do the right thing. If the application does not verify the state of the last transaction after failing over, duplicate transactions are possible. This is a known problem, and most of us have experienced it buying tickets or doing similar web transactions.
In my development environment, I seek to recreate a production issue we face with MSSQL 2005. This issue has two parts:
The Problem
1) A deadlock occurs and MSSQL selects one connection ("Connection X") as the 'victim'.
2) All subsequent attempts to use "Connection X" fail (we use connection pooling). MSSQL says "The server failed to resume the transaction"
Of the two, #2 is more serious: since "Connection X" is whacked, every "round robin" attempt to re-use "Connection X" fails, and mysterious "random" errors appear to the user. We must restart the server.
Why I Write
At this point, however, I wish to recreate problem #1. I can create a deadlock easily.
But here's my issue: whereas in production MSSQL chooses one connection (SPID) as the 'deadlock victim', in my test environment the deadlock just hangs... and hangs and hangs. Forever? I'm not sure, but I left it hanging overnight and it was still hanging in the morning.
So here's the question: how can I make sql server "choose a deadlock victim" when a deadlock occurs?
Attempts so Far
I tried setting the "lock_timeout" parameter via the JDBC URL ("lockTimeout=5000"); however, I got a different message than in production (in test, "Lock request time out period exceeded." instead of production's "Transaction (Process ID 59) was deadlocked on lock resources with another process and has been chosen as the deadlock victim.").
Some details on problem #2
I've researched this "unable to resume the transaction" problem and found a few things:
Bad exception handling may cause this problem, e.g. the Java code does not close the Statement/PreparedStatement and the driver's implementation of "Connection" is stuck with a bad/stale/old "transaction ID".
A JDBC driver upgrade may make the problem go away.
For now, however, I just want to recreate a deadlock and make SQL Server "choose a deadlock victim".
thanks in advance!
Appendix A. Technical Environment
Development:
sql server 2005 SP3 (9.00.4035.00)
driver: sqljdbc.jar version 1.0
Jboss 3.2.6
jdbc url: jdbc:sqlserver://<>;
Production:
sql server 2005 SP2 (9.00.3042.00)
driver: sqljdbc.jar version 1.0
Jboss 3.2.6
jdbc url: jdbc:sqlserver://<>;
Appendix B. Steps to force a deadlock
get connection A
get connection B
run sql1 with connection A
run sql2 with connection B
run sql1 with connection B
run sql2 with connection A
where
sql1:
update member set name = name + 'x' WHERE member_id = 71
sql2:
update member set name = name + 'x' WHERE member_id = 72
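For reference, here are those steps as runnable T-SQL (a sketch assuming the member table above; run the two halves in two separate sessions, and note that the explicit BEGIN TRAN is what keeps the first lock held):

-- Session A:
BEGIN TRAN;
UPDATE member SET name = name + 'x' WHERE member_id = 71;  -- A locks row 71

-- Session B:
BEGIN TRAN;
UPDATE member SET name = name + 'x' WHERE member_id = 72;  -- B locks row 72
UPDATE member SET name = name + 'x' WHERE member_id = 71;  -- B blocks, waiting on A

-- Session A again:
UPDATE member SET name = name + 'x' WHERE member_id = 72;  -- cycle complete; within
-- seconds SQL Server picks one session as the deadlock victim (error 1205)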
The explanation of why the JDBC connection enters the incorrect state is given here: The server failed to resume the transaction... Why? You should upgrade to the JDBC SQL driver v2.0 before anything else. The link also contains advice on how to fix the application processing to avoid this situation, most importantly about avoiding mixing the JDBC transaction API with native Transact-SQL transactions.
As for the deadlock repro: you did not recreate a deadlock in test; you just blocked, waiting for a transaction to commit. A deadlock is a different thing, and SQL Server will choose a victim; you do not have to set deadlock priority, lock timeouts, or anything else. Deadlock priorities are a completely different topic and are used to choose the victim in certain scenarios, like high-priority work vs. low-priority overnight batch processing.
Any deadlock investigation should start with understanding the deadlock if you want to eliminate it. The Deadlock Graph event class in Profiler is the perfect starting point. With the deadlock graph info you can see which resources the deadlock is occurring on and which statements are involved. Most of the time the solution is either to fix the order of updates in the application (always follow the same order) or to fix the access path (i.e. add an index).
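If you prefer not to run Profiler, a common alternative (a sketch; requires sysadmin) is to have SQL Server write deadlock details to its error log via trace flags:

-- Trace flag 1222 (SQL 2005+) prints deadlock graph details to the error log
DBCC TRACEON (1222, -1);     -- -1 applies the flag server-wide
-- ...reproduce the deadlock, then read the error log:
EXEC sys.sp_readerrorlog;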
Update
The UPDATE .. WHERE key IN (SELECT ...) pattern is usually deadlocking because the operation is not atomic: multiple threads can return the same IN list because the SELECT part does not lock anything. This is just a guess; to properly validate it you must look at the deadlock info.
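One common way to make that pattern atomic (a sketch assuming hypothetical @flag/@post_id parameters; not necessarily the right fix for this exact schema) is to take update locks while reading, so two sessions cannot pick up the same IN list:

DECLARE @flag bit, @post_id int;
SET @flag = 1;       -- hypothetical parameter values
SET @post_id = 42;
UPDATE post
SET lock_flag = @flag
WHERE thread_id IN (
    SELECT thread_id
    FROM post WITH (UPDLOCK, HOLDLOCK)  -- lock the rows as they are read
    WHERE post_id = @post_id
);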
To validate your hand-made test for deadlocks, you should verify that the blocking SPIDs form a loop. Look at:
SELECT session_id, blocking_session_id FROM sys.dm_exec_requests WHERE blocking_session_id <> 0
If the result contains a loop (e.g. A blocked by B and B blocked by A) and the server does not trigger a deadlock, that's a bug. However, what you will likely find is that the blocking list does not form a loop; it will be something like A blocked by B, B blocked by C, and C not in the list, which means you have done something wrong in the repro test.
You can specify a deadlock priority for the session using
SET DEADLOCK_PRIORITY LOW | MEDIUM | HIGH
See this MSDN link for details.
You can also use the following command to view the open transactions
DBCC OPENTRAN (db_name)
This command may help you identify what is causing the deadlock. See MSDN for more info.
What are the queries being run? What is actually causing the deadlock?
You say you have two connections A and B. A runs sql1 then sql2, while B runs sql2 then sql1. So, what is the work (queries) being done? More importantly, where are the transactions? What isolation level are you using? What opens/closes the transactions? (Yes, this leads to questioning the exception processing used by your drivers--if they don't detect and properly process a returned "it didn't work" message, then you absolutely need to take them out back and shoot them--bullets or penicillin, your call.)
Understanding the explicit details underlying the deadlock will allow you to recreate it. I'd first try to recreate it "below" your application -- that is, open up two windows in SSMS, and recreate the application's actions step by step, by hand if/as necessary. Once you can do this, step back and replicate that in your application--all on your development servers, of course!
(A thought--are your Dev databases copies of your Production DBs? If Dev DBs are orders of magnitude smaller than Prod ones, your queries may be the same but what SQL does "under the hood" will be vastly different.)
A last thought: SQL Server will detect and process deadlocks automatically (I really don't think you can disable this). If yours are hanging overnight, then I don't think you have a deadlock, but rather just a conventional locking/blocking issue.
[Posting this now -- going to look something up, will check back later.]
[Later]
Interesting--SQL Server 2005 Compact Edition does not detect deadlocks; it only does timeouts. You're not using that in Dev, are you?
I see no way to "turn off" or otherwise control the deadlock timeout period. I hit and messed with deadlocks just last week, and some arbitrary testing then indicated that deadlocks are detected and resolved (on our dev server) in under 5 seconds. It truly seems like you don't have deadlocks on your Dev machine, just blocking. But realize that this stuff is hard for "armchair DBAs" to analyze; you'd really need to sit down and do some serious analysis of what's going on within the system when this problem is occurring.
[This is a response to the answers. The UI does not allow longer 'comments' on answers.]
What are the queries being run? What is actually causing the deadlock?
In my test environment, I ran very simple queries:
sql1:
UPDATE principal SET name = name + '.' WHERE principal_id = 71
sql2:
UPDATE principal SET name = name + '.' WHERE principal_id = 72
Then I executed them in chiastic/criss-cross order, i.e. without any commits:
connectionA: sql1
connectionB: sql2
connectionB: sql1
connectionA: sql2
This to me seems like a basic example of a deadlock. If this is a "mere lock", however, and not a deadlock, please disabuse me of this notion.
In production, our 'problematic query' ("prodbad") looked like this:
UPDATE post SET lock_flag = ?
WHERE thread_id IN (SELECT thread_id FROM POST WHERE post_id = ?)
Note a few things:
1) This "prod problem query" actually works. AFAIK it had a
deadlock this one time
2) I suspect that the problem lies in page locking, i.e. pessimistic locking due to reads elsewhere in the transaction
3) I do not know what sql this transaction executed prior to this query.
4) This query is an example of "I can do that in one sql statement" processing, which, while it seems clever to the programmer, ultimately causes much more IO than running two queries:
queryM:SELECT thread_id FROM POST WHERE post_id = ?
queryN: UPDATE post SET lock_flag = ? WHERE thread_id = <>
> (A thought--are your Dev databases copies of your Production DBs? If Dev DBs are orders of magnitude smaller than Prod ones, your queries may be the same but what SQL does "under the hood" will be vastly different.)
In this case the prod and dev DBs differ: the prod server had tons of data, the dev DB had little, and the queries were very different. All I wanted to do was recreate a deadlock.
> The server failed to resume the transaction... Why? You should upgrade to JDBC SQL driver v2.0 before anything else.
Thanks. We plan on this change. Switching drivers introduces a little bit of risk, so we'll need to run some tests.
To recap:
I had the "bright idea" to force a simple deadlock and see if my connection was "whacked/hosed/borked/etc." The deadlock, however, behaved differently than in production.