How to stop iBatis committing inserts, updates and deletes? - ibatis

I have a problem using iBatis as the database framework for a web application. I want to manually commit the transaction after a couple of inserts, but iBatis auto-commits after every insert. How can I prevent that?
Here is my SqlMapConfig.xml file content:
<sqlMapConfig>
    <settings enhancementEnabled="true"
              errorTracingEnabled="true"
              useStatementNamespaces="false" />
    <transactionManager type="JDBC" commitRequired="false">
        <dataSource type="JNDI">
            <property name="DataSource"
                      value="java:/comp/env/jdbc/MY_DB" />
        </dataSource>
    </transactionManager>
    <sqlMap resource="com/my/common/Common.xml" />
</sqlMapConfig>

I'm also learning iBatis/MyBatis and gradually getting to grips with it. I can't speak to all the attributes of sqlMapConfig, as I'm uncertain about some of the settings, but I take it you want several inserts included in a single transaction? Similar to a batch update: wrap the batch in a single transaction. This example is based on iBatis 2.3.4:
try {
    sqlMap.startTransaction();
    sqlMap.startBatch();
    for (final ObjectXXXDTO objectReference1 : GenericObjectList) {
        sqlMap.insert("createExample1", objectReference1);
    }
    sqlMap.insert("createExample2", otherReference2);
    sqlMap.insert("createExample3", otherReference3);
    sqlMap.executeBatch();
    sqlMap.commitTransaction();
} catch (final SQLException e) {
    throw new XXXException(e);
} finally {
    try {
        sqlMap.endTransaction();
    } catch (SQLException e) {
        throw new XXXException(e);
    }
}
However, note that when you use a batched set of statements, database-generated keys are not created until you call the executeBatch() method. This means that if you are using selectKey to populate your objects with generated keys, those keys will be null until then. If any object requires a newly generated key as part of another insert, perform that insert or update before startBatch(), as sketched below.
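For example, a minimal sketch of that ordering (the statement names and DTO types here are made up for illustration, they are not from the question's mapping files; SQLException handling is assumed to be done by the surrounding method, as above):

try {
    sqlMap.startTransaction();
    // The parent insert runs outside the batch, so its selectKey fires
    // immediately and the generated id is available below.
    Integer parentId = (Integer) sqlMap.insert("createParent", parentDto);
    sqlMap.startBatch();
    for (final ChildDTO child : children) {
        child.setParentId(parentId);
        // Keys for these inserts are not generated until executeBatch().
        sqlMap.insert("createChild", child);
    }
    sqlMap.executeBatch();
    sqlMap.commitTransaction();
} finally {
    sqlMap.endTransaction();
}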
Again, I'm uncertain whether this is the approach you're after, but I thought I'd post it anyway. Thanks

the "commitRequired" attribute of the element is what you need.
The <transactionManager> element also allows an optional attribute commitRequired that can be true or
false. Normally iBATIS will not commit transactions unless an insert, update, or delete operation has been
performed. This is true even if you explicitly call the commitTransaction() method. This behavior
creates problems in some cases.
(from the myBatis guide)
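To illustrate placement, here is the question's own <transactionManager> with the attribute set to true (per the quote above, true makes the commit honored even when iBATIS thinks no insert, update, or delete has run):

<transactionManager type="JDBC" commitRequired="true">
    <dataSource type="JNDI">
        <property name="DataSource"
                  value="java:/comp/env/jdbc/MY_DB" />
    </dataSource>
</transactionManager>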
Also, I would recommend reading iBatis in Action.

Related

Persist Apache Flink window

I'm trying to use Flink to consume bounded data from a message queue in a streaming fashion. The data will be in the following format:
{"id":-1,"name":"Start"}
{"id":1,"name":"Foo 1"}
{"id":2,"name":"Foo 2"}
{"id":3,"name":"Foo 3"}
{"id":4,"name":"Foo 4"}
{"id":5,"name":"Foo 5"}
...
{"id":-2,"name":"End"}
The start and end of a batch can be determined using the event id. I want to receive such batches and store the latest batch (overwriting the previous one) on disk or in memory. I can write a custom window trigger to extract the events using the start and end flags, as shown below:
DataStream<Foo> fooDataStream = ...

AllWindowedStream<Foo, GlobalWindow> fooWindow = fooDataStream
        .windowAll(GlobalWindows.create())
        .trigger(new CustomTrigger<>())
        .evictor(new Evictor<Foo, GlobalWindow>() {
            @Override
            public void evictBefore(Iterable<TimestampedValue<Foo>> elements, int size, GlobalWindow window, EvictorContext evictorContext) {
                for (Iterator<TimestampedValue<Foo>> iterator = elements.iterator(); iterator.hasNext(); ) {
                    TimestampedValue<Foo> foo = iterator.next();
                    if (foo.getValue().getId() < 0) {
                        iterator.remove();
                    }
                }
            }

            @Override
            public void evictAfter(Iterable<TimestampedValue<Foo>> elements, int size, GlobalWindow window, EvictorContext evictorContext) {
            }
        });
But how can I persist the output of the latest window? One way would be to use a ProcessAllWindowFunction to receive all the events and write them to disk manually, but that feels like a hack. I'm also looking into the Table API with a Flink CEP Pattern (like this question), but I couldn't find a way to clear the Table after each batch to discard the events from the previous batch.
There are a couple of things getting in the way of what you want:
(1) Flink's window operators produce append streams, rather than update streams. They're not designed to update previously emitted results. CEP also doesn't produce update streams.
(2) Flink's file system abstraction does not support overwriting files. This is because object stores, like S3, don't support this operation very well.
I think your options are:
(1) Rework your job so that it produces an update (changelog) stream. You can do this with toChangelogStream, or by using Table/SQL operations that create update streams, such as GROUP BY (when it's used without a time window); see the sketch after this list. On top of this, you'll need to choose a sink that supports retractions/updates, such as a database.
(2) Stick to producing an append stream and use something like the FileSink to write the results to a series of rolling files. Then do some scripting outside of Flink to get what you want out of this.
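For option (1), here's a minimal sketch of the Table API route; the view name, key column, and the LAST_VALUE aggregation are illustrative assumptions, not taken from the question:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

// Assumes an existing StreamExecutionEnvironment `env` and the
// fooDataStream from the question.
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
tableEnv.createTemporaryView("foo", fooDataStream);

// GROUP BY without a time window produces an update stream: each new
// value for an id retracts and replaces the previous one.
Table latest = tableEnv.sqlQuery(
        "SELECT id, LAST_VALUE(name) AS name FROM foo GROUP BY id");

// toChangelogStream emits insert/update/delete rows that an
// upsert-capable sink (e.g. a JDBC upsert sink) can apply.
DataStream<Row> changelog = tableEnv.toChangelogStream(latest);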

Spring Data Mongo: query by example (QBE) does not work for records without "_class" property

If I have records in MongoDB without a "_class" property, query by example does not work. My database is populated by a third-party non-Java microservice, by the way.
Example:
{
    "_id": "5ec3f00d98326d4c0ead815f",
    "first_name": "firstName",
    "last_name": "lastName"
}
Then MongoRepository.findAll(Example<S> example) is not able to find that record. If I add the correct "_class" field manually, everything works as expected.
Has anyone solved this issue?
Spring Data Mongo v3.0.0.RC1
OK, there is an UntypedExampleMatcher that must be used in this case:
ExampleMatcher matcher = UntypedExampleMatcher.matching()
        .withIgnoreNullValues()
        .withIgnoreCase();

Entity probe = ...
Example<Entity> entityExample = Example.of(probe, matcher);
entityRepo.findAll(entityExample);
But this approach did not work at first for some reason: it was very long-running and ended with an exception.
UPDATE:
Because of an incorrect probe, my request tried to fetch thousands of records from the DB, which is why it ended with an exception. After fixing the probe, searching with the QBE approach works like a charm.
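For anyone hitting the same symptom, the fix is to populate enough fields on the probe that the example doesn't match the whole collection. A sketch, with hypothetical entity and field names (assuming the fields map to the document's snake_case keys, e.g. via @Field("first_name")):

Entity probe = new Entity();
probe.setFirstName("firstName"); // without this, ignored nulls make the
                                 // probe match every document

Example<Entity> example = Example.of(
        probe,
        UntypedExampleMatcher.matching().withIgnoreNullValues());
entityRepo.findAll(example);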

Two arrayCollection. Only one is an ArrayCollection [duplicate]

For two weeks we have been having this problem when trying to flush new elements:
CRITICAL: Doctrine\ORM\ORMInvalidArgumentException:
A new entity was found through the relationship 'Comment#capture' that was not configured to cascade persist operations for entity
But the capture is already in the database, and we are getting it with findOneBy, so if we cascade-persist it, or persist it, we get a
Table constraint violation: duplicate entry.
The comments are created in a loop with different captures (each created with new), and all required fields are set.
With all of the entities persisted and/or fetched by findOne (and all valid), the flush still fails.
I've been stuck on this issue for a while, so please help.
I had the same problem, and it was the same EntityManager. I wanted to insert an object with a ManyToOne relation, and I didn't want a cascade persist.
Example:
$category = $em->find("Category", 10);
$product = new Product();
$product->setCategory($category);
$em->persist($product);
$em->flush();
This throws the same exception for me.
So the solution is:
$category = $em->find("Category", 10);
$product = new Product();
$product->setCategory($category);
$em->merge($product);
$em->flush();
In my case, a too-early call of
$this->entityManager->clear();
caused the problem. It also disappeared by clearing only the recent object, like
$this->entityManager->clear($capture);
My answer is relevant to the topic, but not very relevant to your particular case; I'm posting it for those googling, as the answers above did not help me.
In my case, I had the same error while batch-processing entities that had a relation, and that relation was set to the very same entity.
WHAT I DID WRONG:
When I did $this->entityManager->clear(); while processing a batch of entities, I would get this error, because the next batch of entities would point to the now-detached related entity.
WHAT WENT WRONG:
I did not know that $this->entityManager->clear(); works the same as $this->entityManager->detach($entity); except that it detaches ALL of the repository's entities.
I thought that $this->entityManager->clear(); also detaches related entities.
WHAT I SHOULD HAVE DONE:
I should have iterated over the entities and detached them one by one; that would not have detached the related entity that the future entities pointed to.
I hope this helps someone.
First of all, you should take better care of your code. I see something like three different indentation styles in your entity and controller; this is hard to read and does not fit the Symfony2 coding standards.
The code you show for your controller is not complete; we have no idea where $this->activeCapture comes from. Inside it you have $people['capture'], which contains a Capture object, I presume. This is very important.
If the Capture in $people['capture'] is persisted/fetched from a different EntityManager than $this->entityManager (which, again, we do not know where it comes from), Doctrine2 has no idea that the object is already persisted.
You should make sure to use the same instance of the Doctrine Entity Manager for all those operations (use spl_object_hash on the EM object to make sure they are the same instance).
You can also tell the EntityManager what to do with the Capture object.
// Refreshes the persistent state of an entity from the database
$this->entityManager->refresh($captureEntity);
// Or
// Merges the state of a detached entity into the
// persistence context of this EntityManager and returns the managed copy of the entity.
$captureEntity = $this->entityManager->merge($captureEntity);
If this does not help, you should provide more code.
The error:
'Comment#capture' that was not configured to cascade persist operations for entity
The problem:
/**
 * @ORM\ManyToOne(targetEntity="Capture", inversedBy="comments")
 * @ORM\JoinColumn(name="capture_id", referencedColumnName="id", nullable=true)
 */
protected $capture;
does not configure cascade persist.
Try this:
/**
 * @ORM\ManyToOne(targetEntity="Capture", inversedBy="comments", cascade={"persist", "remove"})
 * @ORM\JoinColumn(name="capture_id", referencedColumnName="id", nullable=true)
 */
protected $capture;
Refreshing the entity in question helped my case.
/* $item->getProduct() is already set */
/* Add these 3 lines anyway */
$id = $item->getProduct()->getId();
$reference = $this->getDoctrine()->getReference(Product::class, $id);
$item->setProduct($reference);
/* Original code as follows */
$quote->getItems()->add($item);
$this->getDoctrine()->persist($quote);
$this->getDoctrine()->flush();
Despite my $item already having a Product set elsewhere, I was still getting the error.
Turns out it was set via a different instance of EntityManager.
So this is a hack of sorts: retrieve the id of the existing product, then retrieve a reference to it, and use setProduct to "refresh" the connection. I later fixed it properly by ensuring I have and use only a single instance of EntityManager in my codebase.
I got this error too when I tried to add a new entity.
A new entity was found through the relationship 'Application\Entity\User#chats'
that was not configured to cascade persist operations for entity: ###.
To solve this issue: Either explicitly call EntityManager#persist() on this unknown entity or
configure cascade persist this association in the mapping for example @ManyToOne(..,cascade={"persist"}).
My case was that I tried to save an entity that shouldn't have been saved. The entity's relations were filled in and got saved along with it (User has Chat in a many-to-many relation, but Chat was a temporary entity), and there were some collisions.
So if I used cascade={"persist"}, I got unwanted behaviour: the trash entity was saved. My solution was to remove the non-saved entity from any entities being saved:
// User entity code
public function removeFromChats(Chat $c = null)
{
    if ($c and $this->chats->contains($c)) {
        $this->chats->removeElement($c);
    }
}
Saving code:
/* some code with the $chat entity */
$chat->addUser($user);
// saving
$user->removeFromChats($chat);
$this->getEntityManager()->persist($user);
$this->getEntityManager()->flush();
I want to describe my case, as it might be helpful to somebody.
Given two entities, AdSet and AdSetPlacement, AdSet has the following property:
/**
 * @ORM\OneToOne(targetEntity="AdSetPlacement", mappedBy="adSet", cascade={"persist"})
 *
 * @JMS\Expose
 */
protected $placement;
The error then appears when I try to delete some AdSet objects in a loop, after the first iteration:
foreach ($adSetIds as $adSetId) {
    /** @var AdSet $adSet */
    $adSet = $this->adSetRepository->findOneBy(["id" => $adSetId]);
    $this->em->remove($adSet);
    $this->em->flush();
}
Error:
A new entity was found through the relationship 'AppBundle\Entity\AdSetPlacement#adSet' that was not configured to cascade persist operations for entity: AppBundle\Entity\AdSet#00000000117d7c930000000054c81ae1. To solve this issue: Either explicitly call EntityManager#persist() on this unknown entity or configure cascade persist this association in the mapping for example @ManyToOne(..,cascade={"persist"}). If you cannot find out which entity causes the problem implement 'AppBundle\Entity\AdSet#__toString()' to get a clue.
Solution
The solution was to add "remove" to the $placement cascade options, making it
cascade={"persist","remove"}. This guarantees that the Placement also becomes detached: the entity manager will "forget" about the Placement object, treating it as removed once the AdSet is removed.
Bad alternative
While trying to figure out what was going on, I saw a couple of answers recommending simply using the entity manager's clear method to completely clear the persistence context.
foreach ($adSetIds as $adSetId) {
    /** @var AdSet $adSet */
    $adSet = $this->adSetRepository->findOneBy(["id" => $adSetId]);
    $this->em->remove($adSet);
    $this->em->flush();
    $this->em->clear();
}
That code also works and the issue gets solved, but it's not always what you really want to do. Indeed, it happens quite rarely that you actually need to clear the entity manager.

SQL Server (2014) Stored Procedure doesn't exist [duplicate]

I have this query and I get the error in this function:
var accounts = from account in context.Accounts
               from guranteer in account.Gurantors
               select new AccountsReport
               {
                   CreditRegistryId = account.CreditRegistryId,
                   AccountNumber = account.AccountNo,
                   DateOpened = account.DateOpened,
               };

return accounts.AsEnumerable()
               .Select((account, index) => new AccountsReport()
               {
                   RecordNumber = FormattedRowNumber(account, index + 1),
                   CreditRegistryId = account.CreditRegistryId,
                   DateLastUpdated = DateLastUpdated(account.CreditRegistryId, account.AccountNumber),
                   AccountNumber = FormattedAccountNumber(account.AccountType, account.AccountNumber)
               })
               .OrderBy(c => c.FormattedRecordNumber)
               .ThenByDescending(c => c.StateChangeDate);

public DateTime DateLastUpdated(long creditorRegistryId, string accountNo)
{
    return (from h in context.AccountHistory
            where h.CreditorRegistryId == creditorRegistryId && h.AccountNo == accountNo
            select h.LastUpdated).Max();
}
Error is:
There is already an open DataReader associated with this Command which must be closed first.
Update:
stack trace added:
InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first.]
System.Data.SqlClient.SqlInternalConnectionTds.ValidateConnectionForExecute(SqlCommand command) +5008639
System.Data.SqlClient.SqlConnection.ValidateConnectionForExecute(String method, SqlCommand command) +23
System.Data.SqlClient.SqlCommand.ValidateCommand(String method, Boolean async) +144
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +87
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +32
System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +141
System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) +12
System.Data.Common.DbCommand.ExecuteReader(CommandBehavior behavior) +10
System.Data.EntityClient.EntityCommandDefinition.ExecuteStoreCommands(EntityCommand entityCommand, CommandBehavior behavior) +443
[EntityCommandExecutionException: An error occurred while executing the command definition. See the inner exception for details.]
System.Data.EntityClient.EntityCommandDefinition.ExecuteStoreCommands(EntityCommand entityCommand, CommandBehavior behavior) +479
System.Data.Objects.Internal.ObjectQueryExecutionPlan.Execute(ObjectContext context, ObjectParameterCollection parameterValues) +683
System.Data.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption) +119
System.Data.Objects.ObjectQuery`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator() +38
System.Linq.Enumerable.Single(IEnumerable`1 source) +114
System.Data.Objects.ELinq.ObjectQueryProvider.<GetElementFunction>b__3(IEnumerable`1 sequence) +4
System.Data.Objects.ELinq.ObjectQueryProvider.ExecuteSingle(IEnumerable`1 query, Expression queryRoot) +29
System.Data.Objects.ELinq.ObjectQueryProvider.System.Linq.IQueryProvider.Execute(Expression expression) +91
System.Data.Entity.Internal.Linq.DbQueryProvider.Execute(Expression expression) +69
System.Linq.Queryable.Max(IQueryable`1 source) +216
CreditRegistry.Repositories.CreditRegistryRepository.DateLastUpdated(Int64 creditorRegistryId, String accountNo) in D:\Freelance Work\SuperExpert\CreditRegistry\CreditRegistry\Repositories\CreditRegistryRepository.cs:1497
CreditRegistry.Repositories.CreditRegistryRepository.<AccountDetails>b__88(AccountsReport account, Int32 index) in D:\Freelance Work\SuperExpert\CreditRegistry\CreditRegistry\Repositories\CreditRegistryRepository.cs:1250
System.Linq.<SelectIterator>d__7`2.MoveNext() +198
System.Linq.Buffer`1..ctor(IEnumerable`1 source) +217
System.Linq.<GetEnumerator>d__0.MoveNext() +96
This can happen if you execute a query while iterating over the results from another query. It is not clear from your example where this happens because the example is not complete.
One thing that can cause this is lazy loading triggered when iterating over the results of some query.
This can be easily solved by allowing MARS in your connection string. Add MultipleActiveResultSets=true to the provider part of your connection string (where Data Source, Initial Catalog, etc. are specified).
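For example, a connection string with MARS enabled might look like this (the server and database names are placeholders):

Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=True;MultipleActiveResultSets=True;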
You can use the ToList() method before the return statement.
var accounts = from account in context.Accounts
               from guranteer in account.Gurantors
               select new AccountsReport
               {
                   CreditRegistryId = account.CreditRegistryId,
                   AccountNumber = account.AccountNo,
                   DateOpened = account.DateOpened,
               };

return accounts.AsEnumerable()
               .Select((account, index) => new AccountsReport()
               {
                   RecordNumber = FormattedRowNumber(account, index + 1),
                   CreditRegistryId = account.CreditRegistryId,
                   DateLastUpdated = DateLastUpdated(account.CreditRegistryId, account.AccountNumber),
                   AccountNumber = FormattedAccountNumber(account.AccountType, account.AccountNumber)
               })
               .OrderBy(c => c.FormattedRecordNumber)
               .ThenByDescending(c => c.StateChangeDate)
               .ToList();

public DateTime DateLastUpdated(long creditorRegistryId, string accountNo)
{
    var dateReported = (from h in context.AccountHistory
                        where h.CreditorRegistryId == creditorRegistryId && h.AccountNo == accountNo
                        select h.LastUpdated).Max();
    return dateReported;
}
Use .ToList() to convert the objects read from the db into a list, so they aren't re-read again.
Here is a working connection string for someone who needs reference.
<connectionStrings>
    <add name="IdentityConnection"
         connectionString="Data Source=(LocalDb)\v11.0;AttachDbFilename=|DataDirectory|\IdentityDb.mdf;Integrated Security=True;MultipleActiveResultSets=true;"
         providerName="System.Data.SqlClient" />
</connectionStrings>
In my case, using Include() solved this error, and depending on the situation it can be a lot more efficient than issuing multiple queries, since everything can be queried at once with a join.
IEnumerable<User> users = db.Users.Include("Projects.Tasks.Messages");

foreach (User user in users)
{
    Console.WriteLine(user.Name);
    foreach (Project project in user.Projects)
    {
        Console.WriteLine("\t" + project.Name);
        foreach (Task task in project.Tasks)
        {
            Console.WriteLine("\t\t" + task.Subject);
            foreach (Message message in task.Messages)
            {
                Console.WriteLine("\t\t\t" + message.Text);
            }
        }
    }
}
I don't know whether this is a duplicate answer or not. If it is, I'm sorry. I just want to let those in need know how I solved my issue using ToList().
In my case, I got the same exception for the query below:
int id = adjustmentContext.InformationRequestOrderLinks
    .Where(item => item.OrderNumber == irOrderLinkVO.OrderNumber
        && item.InformationRequestId == irOrderLinkVO.InformationRequestId)
    .Max(item => item.Id);
I solved it like this:
List<Entities.InformationRequestOrderLink> links =
    adjustmentContext.InformationRequestOrderLinks
        .Where(item => item.OrderNumber == irOrderLinkVO.OrderNumber
            && item.InformationRequestId == irOrderLinkVO.InformationRequestId)
        .ToList();

int id = 0;
if (links.Any())
{
    id = links.Max(x => x.Id);
}

if (id == 0)
{
    //do something here
}
It appears that you're calling DateLastUpdated from within an active query using the same EF context, and DateLastUpdated issues a command to the data store itself. Entity Framework only supports one active command per context at a time.
You can refactor your above two queries into one like this:
return accounts.AsEnumerable()
               .Select((account, index) => new AccountsReport()
               {
                   RecordNumber = FormattedRowNumber(account, index + 1),
                   CreditRegistryId = account.CreditRegistryId,
                   DateLastUpdated = (
                       from h in context.AccountHistory
                       where h.CreditorRegistryId == creditorRegistryId && h.AccountNo == accountNo
                       select h.LastUpdated
                   ).Max(),
                   AccountNumber = FormattedAccountNumber(account.AccountType, account.AccountNumber)
               })
               .OrderBy(c => c.FormattedRecordNumber)
               .ThenByDescending(c => c.StateChangeDate);
I also noticed you're calling functions like FormattedAccountNumber and FormattedRecordNumber in the queries. Unless these are stored procs or functions you've imported from your database into the entity data model and mapped correctly, these will also throw exceptions, as EF will not know how to translate those functions into statements it can send to the data store.
Also note that calling AsEnumerable doesn't force the query to execute; query execution is deferred until the results are enumerated. You can force enumeration with ToList or ToArray if you so desire.
In my case, I had opened a query from the data context, like
Dim stores = DataContext.Stores _
    .Where(Function(d) filter.Contains(d.code))
... and then subsequently queried the same...
Dim stores = DataContext.Stores _
    .Where(Function(d) filter.Contains(d.code)).ToList
Adding the .ToList to the first query resolved my issue. I think it makes sense to wrap this in a property like:
Public ReadOnly Property Stores As List(Of Store)
    Get
        If _stores Is Nothing Then
            _stores = DataContext.Stores _
                .Where(Function(d) Filters.Contains(d.code)).ToList
        End If
        Return _stores
    End Get
End Property
Where _stores is a private variable, and Filters is also a readonly property that reads from AppSettings.
As a side note, this can also happen when there is a problem with (internal) data mapping from SQL objects.
For instance:
I created a SQL scalar function that accidentally returned a VARCHAR, and then used it to generate a column in a VIEW. The VIEW was correctly mapped in the DbContext, so LINQ was calling it just fine. However, the entity expected DateTime? while the VIEW was returning String.
Which, oddly, throws
"There is already an open DataReader associated with this Command
which must be closed first"
It was hard to figure out, but after I corrected the return type, all was well.
In addition to Ladislav Mrnka's answer:
If you are publishing and overriding the container on the Settings tab, you can set MultipleActiveResultSets to True. You can find this option by clicking Advanced..., under the Advanced group.
I solved this problem by changing
await _accountSessionDataModel.SaveChangesAsync();
to
_accountSessionDataModel.SaveChanges();
in my Repository class.
public async Task<Session> CreateSession()
{
    var session = new Session();
    _accountSessionDataModel.Sessions.Add(session);
    await _accountSessionDataModel.SaveChangesAsync();
    return session;
}
Changed it to:
public Session CreateSession()
{
    var session = new Session();
    _accountSessionDataModel.Sessions.Add(session);
    _accountSessionDataModel.SaveChanges();
    return session;
}
The problem was that I fetched the sessions on the frontend right after creating a session (in code), but because SaveChangesAsync runs asynchronously, fetching the sessions caused this error: apparently the SaveChangesAsync operation had not yet completed.
For those finding this via Google:
I was getting this error because, as suggested by the error, I failed to close a SqlDataReader prior to creating another on the same SqlCommand, mistakenly assuming that it would be garbage collected when leaving the method it was created in.
I solved the issue by calling sqlDataReader.Close(); before creating the second reader.
Most likely this issue happens because of the "lazy loading" feature of Entity Framework. Unless explicitly requested during the initial fetch, all joined data (anything stored in other database tables) is fetched only when required. In many cases that is a good thing, since it prevents fetching unnecessary data and thus improves query performance (no joins) and saves bandwidth.
In the situation described in the question, the initial fetch is performed, and during the "select" phase the missing lazy-loaded data is requested, additional queries are issued, and then EF complains about an "open DataReader".
The workaround proposed in the accepted answer will allow execution of these queries, and indeed the whole request will succeed.
However, if you examine the requests sent to the database, you will notice multiple requests: an additional request for each piece of missing (lazy-loaded) data. This can be a performance killer.
A better approach is to tell EF to preload all the needed lazy-loaded data during the initial query. This can be done using the "Include" statement:
using System.Data.Entity;
query = query.Include(a => a.LazyLoadedProperty);
This way, all needed joins will be performed and all needed data will be returned as a single query. The issue described in the question will be solved.
The same error happened to me when I was looping over and updating data in an
IEnumerable<MyClass>
When I changed the looped-over collection to a List<MyClass>, filled by converting with .ToList(), it was solved and the updates ran without any errors.
I had the same error when I tried to update some records within a read loop.
I tried the most-voted answer, MultipleActiveResultSets=true, and found that it's just a workaround to get the next error:
New transaction is not allowed because there are other threads running
in the session
The best approach, which will work for huge result sets, is to use chunks and open a separate context for each chunk, as described in
SqlException from Entity Framework - New transaction is not allowed because there are other threads running in the session
Well, for me it was my own bug. I was trying to run an INSERT using SqlCommand.ExecuteReader() when I should have been using SqlCommand.ExecuteNonQuery(). The reader was opened and never closed, causing the error. Watch out for this oversight.
This is extracted from a real-world scenario:
Code works well in a Stage environment, where MultipleActiveResultSets is set in the connection string
Code is published to the Production environment without MultipleActiveResultSets=true
Many pages/calls work while a single one fails
Looking closer at the call, there is an unnecessary call made to the db that needs to be removed
Set MultipleActiveResultSets=true in Production and publish the cleaned-up code; everything works well and efficiently
In conclusion, without forgetting about MultipleActiveResultSets, the code might have run for a long time before a redundant db call that could be very costly was discovered. I suggest not depending fully on setting the MultipleActiveResultSets attribute, but also finding out why the code needs it where it failed.
I am using a web service in my tool, and that service fetches a stored procedure. When a number of client tools call the web service at once, this problem arises. I fixed it by specifying the Synchronized attribute on the functions that fetch the stored procedure. It is working fine now; the error never showed up again in my tool.
[MethodImpl(MethodImplOptions.Synchronized)]
public static List<T> MyDBFunction<T>(string parameter1)
{
}
This attribute allows processing one request at a time, so it solves the issue.
In my case, I had to set MultipleActiveResultSets to True in the connection string.
Then another error appeared (the real one) about not being able to run two SQL commands at the same time on the same data context (EF Core, code first).
The solution for me was to look for any other asynchronous command execution and turn it synchronous, as I had just one DbContext for both commands.
I hope it helps you.

JDO for GAE: Update objects returned by a query

There is a persistable class Project, each instance of which has a list of objects of type Version (an owned one-to-many relation between the Project and Version classes).
I'm getting several Version objects from the datastore with a query, changing them, and trying to save them:
PersistenceManager pm = PMF.get().getPersistenceManager();
Transaction tx = pm.currentTransaction();
try {
    tx.begin();
    Query q = pm.newQuery(Version.class, "... filters here ...");
    q.declareParameters(" ... parameters here ...");
    List<Version> versions = (List<Version>) q.execute(... parameters here ...);
    if (versions.size() > 0) {
        for (Version version : versions) {
            version.setOrder(... value here ...);
        }
        pm.makePersistentAll(versions);
    }
    tx.commit();
    return newVersion.toVersionInfo();
} finally {
    pm.close();
}
Everything executes without errors, the query actually returns several objects, and the properties are set correctly on the versions list at runtime, but the objects' properties are not updated in the datastore.
Generally, as far as I understand, the versions should be saved even without calling
pm.makePersistentAll(versions);
since the object properties are set before pm.close(); but nothing is saved if that line is omitted, either.
At the same time, if I retrieve an instance of type Project (which owns many instances of type Version) with the pm.getObjectById() method and walk through all the related Version objects in a loop, all changes are saved correctly (without calling the pm.makePersistent() method).
The question is: what's wrong with this way of updating objects? Why are the Version object properties not updated in the datastore?
I could not find anything helpful in either the JDO or the GAE documentation.
Thanks for the advice about logs from DataNucleus, and for the sympathy from Peter Recore :)
Frankly, I missed a few important points in my question.
Actually, between
tx.begin();
and
Query q = pm.newQuery(Version.class, "... filters here ...");
I am retrieving a Project instance, and after the Version-updating loop I am persisting one more Version object.
So I actually retrieved some of the versions twice, and according to the commit order in the logs, the Project instance was saved twice as well. The second save operation overwrote the first one.
I changed the order of operations and got the expected behavior.
