'NpgsqlBinaryImporter' does not contain a definition for 'Cancel' - npgsql

As I write this, the documentation at https://www.npgsql.org/doc/copy.html#cancel says:
Import operations can be cancelled at any time by calling the Cancel()
method on the importer object. No data is committed to the database
before the importer is closed or disposed.
Export operations can be cancelled as well, also by calling Cancel().
I just updated my Npgsql package from 3.1.10 to 4.0.7, and now I get the error 'NpgsqlBinaryImporter' does not contain a definition for 'Cancel' for code similar to the following:
void WriteStuff(IEnumerable<RowInfo> enumerable, NpgsqlConnection conn)
{
    using (var writer = conn.BeginBinaryImport("COPY blah blah FROM STDIN (FORMAT BINARY)"))
    {
        try
        {
            foreach (var rowInfo in enumerable)
            {
                writer.StartRow();
                writer.Write(...); // blah blah
            }
            writer.Close();
        }
        catch
        {
            writer.Cancel();
            throw;
        }
    }
}
It looks like this commit made Cancel() private.
So what is the correct way to cancel a bulk operation now? Do I need to wrap it in a transaction?
[Given the answer, I should get rid of the try-catch in the above code and just let the exception happen. Also, the call to writer.Close() should change to writer.Complete().]

Npgsql 4.0 changed the COPY API around cancellation in a significant way, see the release notes.
In a nutshell, you must now explicitly call Complete() on NpgsqlBinaryImporter in order to commit the import; disposing it without calling Complete() will cancel the operation. This was done to make sure that exceptions don't cause a commit, and is aligned with how .NET TransactionScope works.
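For reference, the post-4.0 version of the code from the question would look roughly like this (a sketch reusing the question's RowInfo type and COPY command; the Write call is still a placeholder, as in the original):
void WriteStuff(IEnumerable<RowInfo> enumerable, NpgsqlConnection conn)
{
    using (var writer = conn.BeginBinaryImport("COPY blah blah FROM STDIN (FORMAT BINARY)"))
    {
        foreach (var rowInfo in enumerable)
        {
            writer.StartRow();
            writer.Write(...); // blah blah
        }
        // Commits the import. If an exception is thrown before this line,
        // disposing the importer at the end of the using block cancels the COPY,
        // so the old try/catch with Cancel() is no longer needed.
        writer.Complete();
    }
}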
I'll update the documentation on this - thanks for pointing it out!

Related

Executing a non-query requires a transaction

I migrated my code from WebApi2 to NET5 and now I have a problem when executing a non-query. In the old code I had:
public void CallSp()
{
    var connection = dataContext.GetDatabase().Connection;
    var initialState = connection.State;
    try
    {
        if (initialState == ConnectionState.Closed)
            connection.Open();
        connection.Execute("mysp", commandType: CommandType.StoredProcedure);
    }
    catch
    {
        throw;
    }
    finally
    {
        if (initialState == ConnectionState.Closed)
            connection.Close();
    }
}
This was working fine. After I migrated the code, I'm getting the following exception:
BeginExecuteNonQuery requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized.
So, just before calling Execute I added:
var ct = dataContext.GetDatabase().CurrentTransaction;
var tr = ct.UnderlyingTransaction;
And passed the transaction to Execute. Alas, CurrentTransaction is null, so the above change can't be used.
So then I tried to create a new transaction by doing:
using var tr = dataContext.GetDatabase().BeginTransaction();
And this second change throws a different exception complaining that SqlConnection cannot use parallel transactions.
So now I'm in a situation where the original code no longer works: there is no existing transaction I can pass to Execute, and I can't create a new one either.
How can I make Dapper happy again?
How can I make Dapper happy again?
Dapper has no opinion here whatsoever; what is unhappy is your data provider. It sounds like somewhere, somehow, your dataContext has an ADO.NET transaction active on the connection. I can't tell you where, how, or why. But: while a transaction is active on a connection, ADO.NET providers tend to be pretty fussy about having that same transaction explicitly specified on all commands that are executed on the connection. This could be because you are somehow sharing the same connection between multiple threads, or it could simply be that something with the dataContext has an incomplete transaction somewhere.
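To illustrate the rule the provider is enforcing: once a transaction has been started on a connection, every command executed on that connection has to reference it explicitly. A minimal sketch (assuming a connectionString variable, SqlConnection, and Dapper's Execute extension, with the usual using directives for System.Data, Dapper, and the SqlClient package):
using var connection = new SqlConnection(connectionString);
connection.Open();
using var transaction = connection.BeginTransaction();

// While this transaction is open, SqlClient rejects any command on the
// connection that doesn't specify it - which is the exception from the question.
connection.Execute("mysp",
    commandType: CommandType.StoredProcedure,
    transaction: transaction);

transaction.Commit();
So the practical fix is to find where a transaction is being started (or leaked) on the dataContext's connection and either complete it there or pass it along to Execute.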

'Concurrent saves are not allowed' with manager.enableSaveQueuing(true)

Why does Breeze keep throwing 'Concurrent saves are not allowed' with the manager.enableSaveQueuing(true) option enabled?
Simply because you're trying to issue multiple saves at the same time.
Breeze's default save option is queuing data for saving.
In your case, you can overwrite the option for allowing concurrent saves as follows:
var so = new breeze.SaveOptions({ allowConcurrentSaves: true });
return manager.saveChanges(null, so)
    .then(saveSucceeded)
    .fail(saveFailed);
EDIT
Since you are using the "saveQueuing" plugin, ignore my first answer, since it only applies to concurrent saves.
I'm not sure how your code works, but there are a couple of considerations to keep in mind with save queuing:
You can only issue manager.saveChanges() once inside your code.
On the server side, override the BeforeSaveEntity() method, and add a mutual-exclusion lock statement in your new SaveChanges() method; your code may look something like this:
public void SaveChanges(SaveWorkState saveWorkState)
{
    lock (__lock) // this will block any attempt to issue concurrent saves on the same rows
    {
        // Saving operations go here
    }
}
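For completeness, the __lock referenced above is assumed to be a single object shared by all requests, for example a static field on the same class (hypothetical declaration):
// Hypothetical: one shared lock object for the whole server process,
// so concurrent save requests are serialized.
private static readonly object __lock = new object();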
You might want to look at it in the NoDB Sample.

SqlServer SMO Scripting.ScriptingError event handler not firing

I have a SQL Server 2008 R2/64-bit database server for which I'm writing some fairly basic scripting utilities with the SQL Server Management Objects (SMO). My project is a 32-bit VS2010 executable written in C#.
Most of the effort has been fairly simple and successful. The only problem I'm having is in the firing of my custom event handler that should be called in response to a Scripting Error.
The Scripter object exposes a ScriptingError event, which I have attempted to leverage thusly:
//srv contains a valid server name
Scripter scrp = new Scripter(srv);
//scrp_ScriptingError is my handler
scrp.ScriptingError += new ScriptingErrorEventHandler(scrp_ScriptingError);
My handler is declared thusly:
static void scrp_ScriptingError(object sender, ScriptingErrorEventArgs e)
{
    // my handler goes here, just printing e.Current.Urn to the console
    // This is merely representative, have had other things here, but
    // the handler never fires
    Console.WriteLine(e.Current.Value);
}
All this compiles cleanly.
My code is invoked via a simple scrp.Script(urns) call, where urns is just an array of the database objects being scripted out. Nothing fancy:
try
{
    sc = scripter.Script(urns);
}
catch (Exception e)
{
    WriteLog(String.Format("Failure during scripting: {0}: Inner exception message (if any): {1}",
        e.Message, ((e.InnerException == null) ? "[None]" : e.InnerException.Message)));
}
using (System.IO.StreamWriter file = new System.IO.StreamWriter(fileName, true))
{
    foreach (String currentLine in sc)
    {
        file.WriteLine(currentLine);
        file.WriteLine("GO");
    }
}
The problem is that, no matter what I've tried so far, when errors occur during scripting, my ScriptingError handler never fires.
Even in debug mode within VS2010, when I set a breakpoint in the handler and run my scripting code knowing an error will occur, an exception is thrown but the breakpoint in my ScriptingError handler never trips.
I'm in trees-for-forest mode now, not sure what I've done wrong. Am I expecting the wrong things for the ScriptingError handler?
I have searched the MSDN docs on the SMO objects and found virtually nothing on ScriptingError handlers other than the basic API calls themselves, and precious few examples on the Internet. It seems incredibly simple and straightforward to me - just assigning an event handler to the event - but there's some battery-not-included notice I've failed to note.
Pointers to my error are greatly appreciated, with a polite request for minimal brickbats if the error is exceptionally stupid on my part :)
I am not at a PC right now, but I think you should try setting the ContinueScriptingOnError option. Otherwise there would be no reason for SMO to invoke the event; it would throw an exception instead.
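A minimal sketch of that, reusing the srv and handler from the question (untested, from memory):
Scripter scrp = new Scripter(srv);
// With this set, SMO keeps going on a per-object failure and reports it
// through the ScriptingError event instead of throwing out of Script().
scrp.Options.ContinueScriptingOnError = true;
scrp.ScriptingError += new ScriptingErrorEventHandler(scrp_ScriptingError);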

Closing open cursors in Java Berkeley DB

I'm getting this open-cursors exception when closing some stores in Berkeley DB:
Exception in thread "main" java.lang.IllegalStateException: Database still has 1 open cursors while trying to close.
at com.sleepycat.je.Database.closeInternal(Database.java:462)
at com.sleepycat.je.Database.close(Database.java:314)
at com.sleepycat.persist.impl.Store.closeDb(Store.java:1449)
at com.sleepycat.persist.impl.Store.close(Store.java:1058)
at com.sleepycat.persist.EntityStore.close(EntityStore.java:626)
This error occurs on myStore.close():
public void close() throws DatabaseException {
    myStore.close();
    myDB.close();
    env.close();
}
But I didn't manually open any cursor.
I've searched for this error, and I didn't find anything special I'd have to do (since I didn't open any cursor manually).
So I think I did something wrong when opening the database. Here is what I do when opening the store:
myStore = new EntityStore(env, "StoreTest", storeConfig);
PrimaryIndex<Long, MYClass> myPrimaryIndex = myStore.getPrimaryIndex(Long.class, MyClass.class);
Again: I didn't manually open any cursor.
Any call to EntityIndex.entities() opens a cursor, whether you assign it to a variable or not. So make sure that you assign it to an EntityCursor object and call its close() method afterwards, like this:
EntityCursor<Employee> cursor = primaryIndex.entities();
try {
    ...
} finally {
    cursor.close();
}
I have had a similar problem, and this was the solution, also posted on OTN forums here:
https://forums.oracle.com/forums/thread.jspa?messageID=10241239

Try Catch block in Siebel

I have a script which writes a set of records to a file. I'm using a try-catch block to handle the exceptions. In the catch block I have code that moves the pointer to the next record, but this is not executing. Basically I want to skip the bad record and move on to the next one.
while (currentrecord)
{
    try
    {
        writerecord event
    }
    catch
    {
        currentrecord = next record
    }
}
In most languages (unless you're using something very strange), if 'writerecord event' doesn't throw an exception, the catch block will not be called.
Don't you mean:
while (currentrecord) {
    try { writerecord event }
    catch { log error }
    finally { currentrecord = next record }
}
Are you trying to loop through some records that are returned by a query? Do something like this:
var yourBusObject = TheApplication().GetBusObject("Your Business Object Name");
var yourBusComp = yourBusObject.GetBusComp("Your Business Component Name");
// activate fields that you need to access or update here
yourBusComp.ClearToQuery();
// set search specs here
yourBusComp.ExecuteQuery(ForwardOnly);
if (yourBusComp.FirstRecord()) {
    do {
        try {
            // update the fields here
            yourBusComp.WriteRecord();
        } catch (e) {
            // undo any changes so we can go to the next record
            // If you don't do this I believe NextRecord() will implicitly save and trigger the exception again.
            yourBusComp.UndoRecord();
            // maybe log the error here, or just ignore it
        }
    } while (yourBusComp.NextRecord());
}
You can use a try-finally structure so that whatever is inside the finally block will always be executed, regardless of whether the code throws an exception or not. It's often used to clean up resources such as closing files or connections. Without a catch clause, any exception thrown in your try block will abort execution, jump to your finally block, and run that code.
Agree that 'finally' might be the best bet here - but do we know what the exception actually is? Can you output it in your catch block, so that:
A) you can prove an exception is being thrown (rather than, say, a 'null' being returned or something), and
B) you can make sure the exception you get isn't something that could prevent 'nextrecord' from working as well? [Not sure what the 'finally' would achieve in that case - presumably the exception would have to bubble up to the calling code?]
So you're trying to move onto the next record if you failed to commit this one. Robert Muller had it right. To explain...
If the WriteRecord fails, then the business component will still be positioned on the dirty record. Attempting to move to the next record will make the buscomp try to write it again--because of a feature called "implicit saving".
Solution: You'll have to undo the record (UndoRecord) to abandon your failing field changes before moving onto the next one.
