2sxc - "The server failed to resume the transaction" with multiple webApi.post - sql-server

When trying to issue multiple successive webApi.post calls inside a for loop,
for (i = 1; i <= indexLoops; i++) {
    $2sxc(#Dnn.Module.ModuleID).webApi.post('app/auto/content/entity', {}, newItem);
}
it works fine if the loop runs 2-3 times, but once "i" reaches 4 or more, it gives this error:
The server failed to resume the transaction. Desc:d100000001. The transaction active in this session has been committed or aborted by another session.
Adding a 200 ms delay between loop iterations fixes it.
Is this a SQL Server limitation or a 2sxc controller issue? Can it affect users if they both attempt a post simultaneously?
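Rather than a fixed delay, the posts can be serialized so each request completes before the next one starts. A minimal sketch; the `postItem` stub here is illustrative and stands in for the real `$2sxc(moduleId).webApi.post('app/auto/content/entity', {}, item)` call:

```javascript
// Stub standing in for the 2sxc webApi call; the real version returns a
// promise that resolves when the server has committed the entity.
function postItem(item) {
  return Promise.resolve({ created: item });
}

async function postAll(items) {
  const results = [];
  for (const item of items) {
    // Awaiting each post before issuing the next keeps the server from
    // juggling overlapping transactions on the same session.
    results.push(await postItem(item));
  }
  return results;
}
```

Swapping the stub for the real webApi.post call keeps the requests strictly sequential without guessing at a safe delay.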

Related

How to prevent overwriting of database for requests from different instances (Google App Engine using NDB)

My Google App Engine application (Python 3, standard environment) serves user requests: if the wanted record does not exist in the database, it creates it.
Here is the problem about database overwriting:
When one user (via a browser) sends a request, the running GAE instance may temporarily fail to respond, and GAE then spins up a new process to handle the same request. The result is that two instances respond to the same request: both query the database at almost the same time, each finds no matching record, and each creates a new one. This leaves two duplicate records.
Another scenario: for some reason the user's browser sends the request twice, less than 0.01 seconds apart; the two requests are handled by two instances on the server side, and duplicate records are again created.
I am wondering how to temporarily lock the database by one instance to prevent the database overwriting from another instance.
I have considered the following schemes but have no idea whether it is efficient or not.
For Python 2, Google App Engine provided "memcache", which could be used to mark the status of a query for the purpose of database locking. For Python 3, it seems one has to set up a Redis server to rapidly exchange database status among instances. How efficient is database locking via Redis?
Using Flask's session module. Sessions can be used to share data (most commonly, users' login status) across requests and thus across instances. I am unsure how fast data can be exchanged between instances this way.
Appended information (1)
I followed the advice to use a transaction, but it did not work.
Below is the code I used to verify the transaction.
The reason for the failure may be that the transaction only applies to the CURRENT client. For multiple simultaneous requests, the GAE server side creates different processes or instances to respond, and each process or instance has its own independent client.
@staticmethod
def get_test(test_key_id, unique_user_id, course_key_id, make_new=False):
    import time
    from datetime import datetime
    from google.cloud import datastore
    client = ndb.Client()
    with client.context():
        client2 = datastore.Client()
        print("transaction started at:", datetime.utcnow())
        with client2.transaction():
            print("query started at:", datetime.utcnow())
            my_test = MyTest.query(MyTest.test_key_id == test_key_id,
                                   MyTest.unique_user_id == unique_user_id).get()
            time.sleep(5)
            if make_new and not my_test:
                print("data creation started at:", datetime.utcnow())
                my_test = MyTest(test_key_id=test_key_id,
                                 unique_user_id=unique_user_id,
                                 course_key_id=course_key_id,
                                 status="")
                my_test.put()
                print("data created at:", datetime.utcnow())
        print("transaction ended at:", datetime.utcnow())
    return my_test
Appended information (2)
Here is new information about the usage of memcache (Python 3).
I tried the following code to lock the database using memcache, but it still failed to prevent overwriting.
@user_student.route("/run_test/<test_key_id>/<user_key_id>/")
def run_test(test_key_id, user_key_id=0):
    import time
    from google.appengine.api import memcache
    cache_key_id = test_key_id + "_" + user_key_id
    print("cache_key_id", cache_key_id)
    counter = 0
    client = memcache.Client()
    while True:  # retry loop
        result = client.gets(cache_key_id)
        if result is None or result == "":
            client.cas(cache_key_id, "LOCKED")
            print("memcache added new value: counter =", counter)
            break
        time.sleep(0.01)
        counter += 1
        if counter > 500:
            print("failed after 500 tries.")
            break
    my_test = MyTest.get_test(int(test_key_id), current_user.unique_user_id,
                              current_user.course_key_id, make_new=True)
    client.cas(cache_key_id, "")
    memcache.delete(cache_key_id)
If the problem is duplication rather than overwriting, maybe you should specify the data id when creating new entries, rather than letting GAE generate a random one for you. Then the application will write to the same entry twice instead of creating two entries. The data id can be anything unique, such as a session id, a timestamp, etc.
The problem with transactions is that they prevent you from modifying the same entry in parallel, but they do not stop you from creating two new entries in parallel.
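Sketched in Python (the id format and model name here are illustrative, not from the question): building the entity id deterministically from the fields that define uniqueness means two concurrent "create" requests target the same record instead of producing duplicates.

```python
def deterministic_entity_id(test_key_id, unique_user_id):
    """Build a stable entity id from the fields that define uniqueness.

    Two concurrent requests for the same (test, user) pair produce the
    same id, so the second put() overwrites rather than duplicates.
    """
    return "test_{}_{}".format(test_key_id, unique_user_id)
```

With ndb-style models, such an id would be passed as the entity key, e.g. `MyTest(id=deterministic_entity_id(tid, uid), ...).put()`.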
I used memcache in the following way (using get/set) and succeeded in locking the database writes.
It seems that gets/cas does not work well: in one test, I set the value with cas(), but gets() then failed to read it back.
Memcache API: https://cloud.google.com/appengine/docs/standard/python3/reference/services/bundled/google/appengine/api/memcache
@user_student.route("/run_test/<test_key_id>/<user_key_id>/")
def run_test(test_key_id, user_key_id=0):
    import time
    from google.appengine.api import memcache
    cache_key_id = test_key_id + "_" + user_key_id
    print("cache_key_id", cache_key_id)
    counter = 0
    client = memcache.Client()
    while True:  # retry loop
        result = client.get(cache_key_id)
        if result is None or result == "":
            client.set(cache_key_id, "LOCKED")
            print("memcache added new value: counter =", counter)
            break
        time.sleep(0.01)
        counter += 1
        if counter > 500:
            return "failed after 500 tries of memcache checking."
    my_test = MyTest.get_test(int(test_key_id), current_user.unique_user_id,
                              current_user.course_key_id, make_new=True)
    client.delete(cache_key_id)
...
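Note that the get/set pattern above still has a window between get() and set() where two requests can both see the key as absent. An atomic add-if-absent primitive avoids this; memcache's add() behaves that way (it fails if the key already exists). A minimal sketch, with a plain dict standing in for memcache so it runs locally:

```python
class FakeCache:
    """Stand-in for a memcache client, used here only for illustration."""

    def __init__(self):
        self._data = {}

    def add(self, key, value):
        # Atomic in real memcache: succeeds only if the key is absent.
        if key in self._data:
            return False
        self._data[key] = value
        return True

    def delete(self, key):
        self._data.pop(key, None)


def try_lock(cache, key):
    """Return True if this caller claimed the lock, False otherwise."""
    return cache.add(key, "LOCKED")
```

Because add() either claims the key or fails atomically, only one of two racing requests can win the lock; the loser retries or gives up, with no check-then-set gap.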
Transactions:
https://developers.google.com/appengine/docs/python/datastore/transactions
When two or more transactions simultaneously attempt to modify entities in one or more common entity groups, only the first transaction to commit its changes can succeed; all the others will fail on commit.
You should be updating your values inside a transaction. App Engine's transactions will prevent two updates from overwriting each other as long as your read and write are within a single transaction. Be sure to pay attention to the discussion about entity groups.
You have two options:
1. Implement your own logic for transaction failures (how many times to retry, etc.).
2. Instead of writing to the datastore directly, create a task to modify an entity. Run the transaction inside the task. If it fails, App Engine will retry the task until it succeeds.
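The first option can be sketched as follows (names and the backoff values are illustrative; in real GAE code the except clause would catch the datastore's transaction-conflict error rather than bare Exception):

```python
import time


def run_with_retries(txn_fn, attempts=3, backoff=0.05):
    """Call txn_fn until it succeeds or the attempts are exhausted.

    txn_fn is the function that runs the transaction; on a commit
    conflict it raises, and we retry with simple exponential backoff.
    """
    for attempt in range(attempts):
        try:
            return txn_fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(backoff * (2 ** attempt))
```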

Operational Error: An Existing connection was forcibly closed by the remote host. (10054)

I am getting this Operational Error periodically, probably when the application has been inactive or idle for long hours. On refreshing the page it vanishes. I am using an MSSQL pyodbc connection string ("mssql+pyodbc:///?odbc_connect= ...") in the FormHandlers and DbAuth of Gramex.
How can I keep the connection alive in Gramex?
Add the pool_pre_ping and pool_recycle parameters.
pool_pre_ping normally emits SQL equivalent to "SELECT 1" each time a connection is checked out from the pool; if an error is raised that is detected as a "disconnect" situation, the connection is immediately recycled.
pool_recycle prevents the pool from using a connection that has passed a certain age.
e.g.: engine = create_engine(connection_string, encoding='utf-8', pool_pre_ping=True, pool_recycle=3600)
Alternatively, you can add these parameters for FormHandler in gramex.yaml. This is required only for the first FormHandler with the connection string.
kwargs:
    url: ...
    table: ...
    pool_pre_ping: True
    pool_recycle: 60

SQL deadlock in ColdFusion thread

I'm trying to figure out why I would be getting a deadlock error when executing a simple query inside a thread. I'm running CF10 with SQL Server 2008 R2, on a Windows 2012 server.
Once per day, I've got a process that caches a bunch of blog feeds in a database. For each blog feed, I create a thread and do all the work inside it. Sometimes it runs fine with no errors; other times I get the following error in one or more of the threads:
[Macromedia][SQLServer JDBC Driver][SQLServer]Transaction (Process ID
57) was deadlocked on lock resources with another process and has been
chosen as the deadlock victim. Rerun the transaction.
This deadlock condition happens when I run a query that sets a flag indicating that the feed is being updated. Obviously, this query could happen concurrently with other threads that are updating other feeds.
From my research, I think I can solve the problem by putting an exclusive named lock around the query, but why would I need to do that? I've never had to deal with deadlocks before, so forgive my ignorance on the subject. How is it possible that I run into a deadlock condition?
Since there's too much code to post, here's a rough algorithm:
thread name="#createUUID()#" action="run" idBlog=idBlog {
    try {
        var feedResults = getFeed(idBlog);
        if (feedResults.errorCode != 0)
            throw(message="failed to get feed");
        transaction {
            /* just a simple query to set a flag */
            dirtyBlogCache(idBlog); /* this is where I get the deadlock */
            cacheFeedResults(idBlog, feedResults);
        }
    } catch (any e) {
        reportError(e);
    }
} /* thread */
This approach has been working well for me.
<cffunction name="runQuery" access="private" returntype="query">
    <!--- arguments if necessary --->
    <cfset var whatever = QueryNew("a")>
    <cfquery name="whatever">
        sql
    </cfquery>
    <cfreturn whatever>
</cffunction>

attempts = 0;
myQuery = "not a query";
while (attempts <= 3 && isQuery(myQuery) == false) {
    attempts += 1;
    try {
        myQuery = runQuery();
    }
    catch (any e) {
    }
}
After all, the message does say to re-run the transaction.

Cannot SQLBulkCopy Error 40197 with %d code of 4815 (Connection Forcibly Closed)

Developing with VS 2013, an ASP.NET MVC 5 web project, and a separately hosted Azure SQL Server database.
At the bottom is all my error information from Visual Studio 2013. I've narrowed down the problem and found a link to Microsoft's description of it, without a solution. I'm developing database-first with Entity Framework 6, ASP.NET 4 MVC & Razor. I connect to a SQL Azure database; I think this is what's falling over. I've already checked the logs for the Azure website, etc.
I have delimited text files (uploaded to APP_DATA) that I load into a DataTable, then use SqlBulkCopy to dump the content into the Azure database. Everything works 100% fine as long as my files contain only a few hundred records. But I need to insert 20MB files with approximately 200,000 rows. When I try the big files, I get an error at the point where ASP.NET performs the bulk copy. No matter what I set for batch size etc., it bails around the 4,000-row mark every time. I've exhausted all options and am at my wits' end; I even tried scaling the Azure database up from the FREE web tier to Business, and scaling up the website too. Here is the code:
public void BatchBulkCopy(DataTable dataTable, string DestinationTbl, int batchSize, int identity)
{
    try
    {
        System.Diagnostics.Debug.WriteLine("Start SQL Bulk Copy");
        using (SqlBulkCopy sbc = new SqlBulkCopy("Server=tcp:eumtj4loxy.database.windows.net,1433;Database=AscWaterDB;User ID=HIDDEN#HIDDEN;Password=XXXXXXX;Trusted_Connection=False;Encrypt=True;Connection Timeout=900;", SqlBulkCopyOptions.TableLock))
        {
            sbc.DestinationTableName = DestinationTbl;
            sbc.BulkCopyTimeout = 0;
            // Number of records to be processed in one go
            sbc.BatchSize = 1000;
            // Add your column mappings here
            sbc.ColumnMappings.Add("D2001_SPID", "SupplyPointId");
            sbc.ColumnMappings.Add("D2002_ServiceCategory", "D2002_ServiceCategory");
            sbc.ColumnMappings.Add("D2025_NotifyDisconnection/Reconnection", "D2025_NotifyDisconnectionReconnection");
            sbc.ColumnMappings.Add("WaterBatchId", "WaterBatchId");
            sbc.ColumnMappings.Add("D2003_Schedule3", "D2003_Schedule3");
            sbc.ColumnMappings.Add("D2004_ExemptCustomerFlag", "D2004_ExemptCustomerFlag");
            sbc.ColumnMappings.Add("D2005_CustomerClassification", "D2005_CustomerClassification");
            sbc.ColumnMappings.Add("D2006_29e", "D2006_29e");
            sbc.ColumnMappings.Add("D2007_LargeVolAgreement", "D2007_LargeVolAgreement");
            sbc.ColumnMappings.Add("D2008_SICCode", "D2008_SICCode");
            sbc.ColumnMappings.Add("D2011_RateableValue", "D2011_RateableValue");
            sbc.ColumnMappings.Add("D2015_SPIDVacant", "D2015_SPIDVacant");
            sbc.ColumnMappings.Add("D2018_TroughsDrinkingBowls", "D2018_TroughsDrinkingBowls");
            sbc.ColumnMappings.Add("D2019_WaterServicesToCaravans", "D2019_WaterServicesToCaravans");
            sbc.ColumnMappings.Add("D2020_OutsideTaps", "D2020_OutsideTaps");
            sbc.ColumnMappings.Add("D2022_TransitionalArrangements", "D2022_TransitionalArrangements");
            sbc.ColumnMappings.Add("D2024_Unmeasurable", "D2024_Unmeasurable");
            sbc.ColumnMappings.Add("D2014_FarmCroft", "D2014_FarmCroft");
            // Finally write to server
            System.Diagnostics.Debug.WriteLine("Write Bulk Copy to Server " + DateTime.Now.ToString());
            sbc.WriteToServer(dataTable); // Fails here when I upload a 20MB CSV with 190,000 rows
            sbc.Close();
        }
        // Ignore this: I don't get to this code unless loading a file that's only got a few records
        WaterBatch obj = GetWaterBatch(identity); // Now we can get the WaterBatch
        obj.StopDateTime = DateTime.Now;
        Edit(obj);
        Save();
        System.Diagnostics.Debug.WriteLine("Finished " + DateTime.Now.ToString());
    }
    catch (Exception ex)
    {
        // Walk to the innermost exception before reporting it
        Exception ex2 = ex;
        while (ex2.InnerException != null)
        {
            ex2 = ex2.InnerException;
        }
        Console.WriteLine(ex2);
        throw;
    }
}
My $Exception says :
$exception {"A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)"} System.Exception {System.Data.SqlClient.SqlException}
My InnerException (drilling into the inner exceptions gives the same message, with HResults of -2146232060 and then -2147467259) is:
InnerException {"An existing connection was forcibly closed by the remote host"} System.Exception {System.ComponentModel.Win32Exception}
UPDATED INFO :
Microsoft's explanation of the error is below. I am getting error number 40197; Microsoft says to look at the embedded %d code, which in my case is 4815. The question is what to do now: where can I go from here to get information on a 40197 with a %d of 4815?
I got the following info regarding my error from this link: http://msdn.microsoft.com/en-us/library/windowsazure/ff394106.aspx
40197
17
The service has encountered an error processing your request. Please try again. Error code %d.
You will receive this error, when the service is down due to software or hardware upgrades, hardware failures, or any other failover problems. The error code (%d) embedded within the message of error 40197 provides additional information about the kind of failure or failover that occurred. Some examples of the error codes embedded within the message of error 40197 are 40020, 40143, 40166, and 40540.
Reconnecting to your SQL Database server will automatically connect you to a healthy copy of your database. Your application must catch error 40197, log the embedded error code (%d) within the message for troubleshooting, and try reconnecting to SQL Database until the resources are available, and your connection is established again.
I was getting the exact same error during a bulk insert. In my case, it was a varchar column that was overflowing. I just needed to increase the character limit and the problem was solved.
Just increasing the length of the variable worked for me, even though the value being stored was much smaller than the size of the variable.

What does the "Cannot commit when autoCommit is enabled" error mean?

In my program, I've got several threads in a pool, each of which tries to write to the DB. The number of threads created is dynamic. When only one thread is created, all works fine. However, with multiple threads executing, I get the error:
org.apache.ddlutils.DatabaseOperationException: org.postgresql.util.PSQLException: Cannot commit when autoCommit is enabled.
I'm guessing that, since the threads execute in parallel, two threads try to write at the same time and this produces the error.
Do you think this is the case? If not, what could be causing this error?
Otherwise, if what I said is the problem, what can I do to fix it?
In your JDBC code, you should turn off autocommit as soon as you fetch the connection. Something like this:
DataSource datasource = getDatasource(); // fetch your datasource somehow
Connection c = null;
try {
    c = datasource.getConnection();
    c.setAutoCommit(false);
    // ... do the writes, then c.commit();
} finally {
    if (c != null) c.close();
}

Resources