I am working on a Windows application (WinForms) that requires four different processes to run (stored procedures) that include the ability to commit or rollback the entire process in the instance that any of the four processes raised an error. I'll describe my situation below (and please excuse my SQL ignorance as I am not a DBA):
Windows App: Click Process Button
Database: Initiate Procedure 1 (and presumably BEGIN TRANSACTION MyProcessTransaction)
Windows App: Receives notification from Procedure 1 that it completed successfully and updates a progress bar.
Database: Initiate Procedure 2 (and somehow still encapsulated under MyProcessTransaction)
Windows App: Receives notification from Procedure 2 that it completed successfully and updates a progress bar.
Database: Initiate Procedure 3 (somehow still encapsulated under MyProcessTransaction), which raises an error and rolls back the transaction MyProcessTransaction
Is this possible? I tried four query windows and began a transaction in the first, then put try/catch blocks in the other windows and attempted to perform work in each to simulate this, but when I got to the fourth window and intentionally raised an error, I received an exception saying there was no corresponding BEGIN TRANSACTION for me to ROLLBACK. Any suggestions?
Well, as usual, after doing some digging, I found a solution that works for me. Not that it's the sole solution, but it's efficient and it works:
https://stackoverflow.com/a/13228090/176806
I never put two and two together to realize that a TRANSACTION is connection-specific. So I did as suggested: created a transaction via the connection object, passed that transaction from call to call, and, in the event of an error, rolled back that transaction. If anyone feels like answering something else, I'm still open to any suggestions.
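To make the connection-bound pattern concrete, here is a minimal sketch. It is Python with sqlite3 standing in for the C#/SQL Server pieces (the `procedure` function and `audit` table are invented for the illustration), but the shape is the same as passing a SqlTransaction from call to call: every "procedure" runs on one shared connection, so a single BEGIN/COMMIT/ROLLBACK spans all of them.

```python
import sqlite3

def procedure(conn, step, fail=False):
    """Stands in for one stored procedure; every call runs on the shared connection."""
    conn.execute("INSERT INTO audit (step) VALUES (?)", (step,))
    if fail:
        raise RuntimeError(f"{step} raised an error")

def run_process(steps):
    # isolation_level=None puts sqlite3 in autocommit mode so we control
    # the transaction explicitly, the way SqlConnection.BeginTransaction would.
    conn = sqlite3.connect(":memory:", isolation_level=None)
    conn.execute("CREATE TABLE audit (step TEXT)")
    conn.execute("BEGIN")  # one transaction spans every procedure call
    try:
        for name, fail in steps:
            procedure(conn, name, fail)
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")  # any failure undoes all completed steps
    return [row[0] for row in conn.execute("SELECT step FROM audit")]

print(run_process([("proc1", False), ("proc2", False)]))  # → ['proc1', 'proc2']
print(run_process([("proc1", False), ("proc2", True)]))   # → []
```

The second call shows the point of the pattern: because the failing step shares the transaction, its rollback erases the work of the steps that had already succeeded.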
I'm wondering if there is a way to recognize that an OfflineCommand is being executed, or some internal flag to indicate that a command has been sent or has executed successfully. With an unstable internet connection I have trouble telling whether a command went through. I keep retrieving the records from the database and comparing them every time to see whether the command was applied, but due to the flow of my application I'm finding it very difficult to avoid duplicates. Is there any automatic process to make sure commands are executed, or something else I could use?
Second question: I can use a UITimer on the forms to check isOffline() to see whether the internet is connected. Is there something equivalent on the server page, or where the queries are written, to detect that the connection has dropped? When control has moved to the queries and the internet disconnects, the dialog opened from the form page freezes indefinitely and never ends; I have to close and re-open the app to continue the synchronization process. At the same time I cannot set a timeout on the dialog because I don't know how long the synchronization process will take to complete. Please advise.
Extending on the same topic, but I have created a new issue just to give more clarity on my questions:
executeOfflineCommand skips a command while executing from storage on Android
There is no way to know whether a connection will stay stable, as that would require knowledge of the future. You can work the way transaction services do, where the server side processes an offline command as a transaction using a two-phase-commit approach.
In this approach you have an algorithm similar to this:
Client sends command to server
Server returns a special unique ID for the command
Client asks the server to execute the command with that unique ID
Server acknowledges that the command was performed
If the first two stages didn't complete, you just do them again. The worst that can happen is some orphaned commands on the server.
If the third step didn't complete, you just do it again. The server knows whether it already processed the command and will simply acknowledge it if it was already processed.
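The steps above can be sketched in a few lines. This is a hedged, server-side-only illustration in Python (the `Server` class and its `register`/`execute` methods are invented names, and a real implementation would persist the id-to-result map); the point is that re-running `execute` with the same ID is safe:

```python
import uuid

class Server:
    def __init__(self):
        self.registered = {}  # command id -> command payload
        self.processed = {}   # command id -> result; makes execution idempotent
        self.runs = 0         # counts real executions, for demonstration

    def register(self, command):
        """Stages 1-2: client sends the command, server hands back a unique ID."""
        cid = str(uuid.uuid4())
        self.registered[cid] = command
        return cid

    def execute(self, cid):
        """Stages 3-4: client asks for execution by ID; retries are harmless."""
        if cid in self.processed:
            return self.processed[cid]  # already done: just acknowledge
        self.runs += 1
        result = f"ran {self.registered[cid]}"
        self.processed[cid] = result
        return result

server = Server()
cid = server.register("add-order")
first = server.execute(cid)
retry = server.execute(cid)  # client retries after a dropped acknowledgement
print(first == retry, server.runs)  # → True 1
```

If the connection dies before the acknowledgement arrives, the client simply calls `execute(cid)` again; the command still runs exactly once.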
I have a workflow like this as an Azure Logic App:
Read from Azure Table -> Process it in a Function -> Send Data to SQL Server -> Send an email
Currently we can check if the previous action ended with an error and based on that we do not execute any further steps.
Is it possible in Logic Apps to perform a Rollback of actions when one of the steps goes wrong? Meaning can we undo all the steps to the beginning when something in step 3 goes wrong, for example.
Thanks in advance.
Regards.
Currently there is no support for rollback in Logic Apps (as they are not transactional).
Note that Logic Apps provide out-of-the-box resiliency against intermittent errors (retry strategies), which should minimize execution failures.
You can add custom handling of errors (e.g., following your example, if something goes wrong in step 3, you can explicitly handle the failure and add rollback steps). Take a look at https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-exception-handling for details.
Depending on whether the steps in your logic app are idempotent, you can also make use of the resubmit feature. It allows you to re-trigger the run with the same trigger contents with which the original run instance was invoked. Take a look at https://www.codit.eu/blog/2017/02/logic-apps-resubmit-considerations/ for a good overview of this feature.
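Since Logic Apps won't undo anything for you, the "rollback steps" mentioned above amount to the compensation (saga) pattern: record an undo action for each completed step, and on failure run them in reverse order. A language-neutral sketch in Python (the step names here are invented for the illustration, not real Logic Apps actions):

```python
def run_with_compensation(steps):
    """steps is a list of (do, undo) pairs. If any do() fails, the undo()
    of every previously completed step runs in reverse order."""
    completed = []
    try:
        for do, undo in steps:
            do()
            completed.append(undo)
    except Exception:
        for undo in reversed(completed):
            undo()
        raise

log = []

def step3_fails():
    raise RuntimeError("step 3 failed")

try:
    run_with_compensation([
        (lambda: log.append("read table"), lambda: log.append("undo read")),
        (lambda: log.append("write sql"),  lambda: log.append("delete sql rows")),
        (step3_fails,                      lambda: log.append("never runs")),
    ])
except RuntimeError:
    pass

print(log)  # → ['read table', 'write sql', 'delete sql rows', 'undo read']
```

Note that compensation only works if each step has a meaningful inverse; an already-sent email, for instance, can at best be followed by a correction email.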
I have an ASP.NET reporting interface that displays several values that are returned from a SQL Server backend. Once logged in, the browser page is never reloaded, but several screen areas are updated on a timer through AJAX calls.
My problem is that the screen areas are intermittently displaying values that are coming from previous AJAX calls. I have thoroughly and intensively investigated the problem for a number of days and I haven't been able to specifically pinpoint what is causing it, or how to completely overcome it. Currently the incorrect values are very infrequent (3 in 50,000, say), but I should be getting none whatsoever! These are some details about the setup:
the screen refresh timer runs every 30 seconds to update all screen areas
there is a 1.5 second lag between the screen area updates
the values used to decide which SQL stored proc to run to get the correct values from the database for each screen area are being passed in to the ASP.NET interface correctly - I know with 100% certainty which stored proc to run
the stored proc returns its values to a SQLDataReader
it is the reader that is sometimes yielding values that seem to be "buffered" from previous AJAX interactions, i.e. I am running the correct stored proc with the correct variables, but the values returned are not what I get if I run that precise same command in a SQL query interface; they are results from a previous call
the SQL connection, command and reader are all created and instantiated afresh for each interaction and disposed of correctly after use through the IDisposable interface
I have swapped the reader for a dataset, with no difference in results
my AJAX calls are synchronous (async=false), so each should complete before the next one runs; I also have the 1.5 sec delay between screen area updates and 30 secs between cycles, so they shouldn't run into each other in any event.
What is frustrating is that the reader is running a SQL statement without throwing an exception, but seemingly returning results from a previous interaction - and then only very seldom, but one incorrect result is one too many.
I am not using ASP.NET state management at all - switched off in web.config.
What am I missing?
Profile the SQL statements that are arriving at SQL Server with SQL Server Profiler. You can narrow the potential locations of the bug that way. Next, start the Fiddler HTTP debugger and verify the HTTP requests. And let us know what you found!
it is the reader that is sometimes yielding values that seem to be "buffered" from previous AJAX interactions, i.e. I am running the correct stored proc with the correct variables, but the values returned are not what I get if I ran that precise same command in a SQL query interface - they are results from a previous call ... the SQL connection, command and reader are all created and instantiated afresh for each interaction and disposed of correctly after use through the IDisposable interface ... I have swapped the reader for a dataset, with no difference in results
It would help to show us your code here. If the data objects contain the wrong values, you have to demonstrate with certainty that the correct SPs are being run on the back end and, if applicable, that the correct parameters are being passed to the back end. You should be able to step through the call in the Visual Studio debugger.
I'm testing an import script on a shared web host I just got, but I found that transactions are blocked after running it for 20 minutes or so. I assume this is to avoid overloading the database, but even when I import one item every 1 second, I still run into the problem. To be specific, when I try to save an object I receive the error:
DatabaseError: current transaction is aborted, commands ignored until end of transaction block
I've tried delaying for a few hours after this happens, but the block remains. The only way to resume importing is to completely restart the importing program. Because of this, I reasoned that all I need to do is reconnect to the DB. This might not be true, but it's worth a try.
So my question is this, how can I disconnect and reconnect the DB connection in Django? Is this possible?
Most likely some other database error occurred before this one, but your code ignored it and went forward with the transaction in a broken state.
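That is exactly what PostgreSQL's error message means: once any statement inside a transaction fails, every later statement is refused until the transaction block ends. The usual fix is not reconnecting but catching the first error and rolling back. Here is a toy model of the behavior in plain Python (the `Conn` class is invented to mimic the driver, not a real psycopg or Django API):

```python
class Conn:
    """Mimics a Postgres connection: after one failed statement, every further
    command is refused until rollback() ends the aborted transaction block."""
    def __init__(self):
        self.aborted = False
        self.log = []

    def execute(self, sql):
        if self.aborted:
            raise RuntimeError("current transaction is aborted, "
                               "commands ignored until end of transaction block")
        if sql == "BAD SQL":
            self.aborted = True          # the transaction is now poisoned
            raise RuntimeError("syntax error")
        self.log.append(sql)

    def rollback(self):
        self.aborted = False             # ends the aborted transaction block

conn = Conn()
try:
    conn.execute("BAD SQL")              # the original, silently ignored error
except RuntimeError:
    conn.rollback()                      # without this, every later command fails
conn.execute("INSERT INTO items VALUES (1)")  # now succeeds
print(conn.log)  # → ['INSERT INTO items VALUES (1)']
```

In Django specifically, handling the failing item (rolling back, or giving each imported item its own transaction) should remove the need to disconnect and reconnect at all.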
I have an application that is now 4+ years old that is exhibiting some odd behavior on our latest deployment. The application uses nHibernate for all inserts / updates / selects, etc. We are currently using .NET 2.0, and nHibernate 1.2 (I know, we need to upgrade)
This deployment is on Windows 2008 Server x64, IIS 7.5 - what I have seen so far is that the application runs, but is unable to insert or update records in the DB - reads seem fine so far, but writes are a problem. SOME writes actually work, inserts into some small tables, but most never even make it to the DB.
Using SQL Profiler, the insert / updates never make it to the server, and turning log4net up to DEBUG, and show_sql true - the select statements appear, but the insert / update statements never make it into the log at all, and never show up at the server.
What's even more odd is that the application seems to be oblivious to this - the commit-and-close runs without exception (open session in view with an HttpModule), the domain objects come back with UUIDs generated, etc., but they never get persisted.
Certainly an upgrade is due, but I would hate to try it during a deployment, and without time to accurately test the app. Any ideas?
My guess is that the default ISession FlushMode has been changed from Auto to Never or Commit. Never means that the session will flush when Flush() is called by the application; Commit means that the session will flush when a transaction is committed.
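The effect of the flush mode can be modeled in a few lines. This is a toy Python model, not NHibernate's API: saves queue up in the session, and only a flush pushes them to the database. With a flush mode of Never and no explicit flush (for example, the open-session-in-view module never firing), the writes silently vanish, which matches the symptoms above.

```python
class Session:
    """Toy model of session flush modes: 'auto'/'commit' flush on commit,
    'never' flushes only when flush() is called explicitly."""
    def __init__(self, flush_mode):
        self.flush_mode = flush_mode
        self.pending = []    # saves queued in the session, not yet in the DB
        self.database = []   # what actually got persisted

    def save(self, obj):
        self.pending.append(obj)  # no SQL is issued yet; the object just queues

    def flush(self):
        self.database.extend(self.pending)  # this is where the SQL would run
        self.pending.clear()

    def commit(self):
        if self.flush_mode in ("auto", "commit"):
            self.flush()
        # flush mode "never": nothing reaches the DB unless flush() is called

s = Session("commit")
s.save("order-1")
s.commit()
print(s.database)  # → ['order-1']

s2 = Session("never")
s2.save("order-2")
s2.commit()
print(s2.database)  # → [] : the save succeeds in memory but never persists
```

Note how the second session raises no exception anywhere, just as the application above saw: the save "works", IDs are assigned in memory, and nothing appears in the profiler.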
Back out your current deployment and return to what you had before. Then look for the mistake someone made. If it used to insert and now does not, then something is wrong with your current code. If it isn't creating the insert/update statements, I'd look first at where they are supposed to be created. Did the current deployment actually insert or update records in dev? Did anybody test that, or were you relying on the fact that it didn't pop up an error? If it worked in dev and doesn't work in prod, I'd look at the environmental differences between dev and prod.
Both good answers; the problem was in the deployment. The web.config was set up for IIS 6, and the deployment to IIS 7 did not properly set up the open-session-in-view HttpModule that is used to commit the transaction. Changing the pipeline mode from Integrated to Classic solved the problem.