I'm writing a C program that uses sqlite3.
Sometimes the insert works, but sometimes it doesn't.
assert(retval == SQLITE_OK)
fails. While debugging I found that the return value of sqlite3_step() is error code 5,
which means the database file is busy (SQLITE_BUSY).
Even closing with sqlite3_close() returns error code 5.
Any thoughts on how to close the database connection, even when it's busy?
You could be in the middle of a transaction. Check http://www.sqlite.org/c3ref/get_autocommit.html for an explanation of how sqlite3_get_autocommit() works: it returns zero if the connection is not in auto-commit mode, meaning there is an open transaction.
Or, if you suspect your database might still be working on something, you can use sqlite3_busy_timeout() to set a busy timeout: http://www.sqlite.org/c3ref/busy_timeout.html
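As a minimal sketch (assuming an already-opened sqlite3 *db handle; the helper name is hypothetical), you could check for an open transaction and give busy operations some time before failing:

#include <sqlite3.h>
#include <stdio.h>

/* Hypothetical helper: inspect the connection state, then try to close it. */
static void inspect_and_close(sqlite3 *db)
{
    /* Zero means a transaction is still open on this connection. */
    if (sqlite3_get_autocommit(db) == 0)
        fprintf(stderr, "A transaction is still open; commit or roll it back first.\n");

    /* Retry busy operations for up to 2 seconds instead of failing
       immediately with SQLITE_BUSY (error code 5). */
    sqlite3_busy_timeout(db, 2000);

    if (sqlite3_close(db) != SQLITE_OK)
        fprintf(stderr, "close failed: %s\n", sqlite3_errmsg(db));
}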
In my SSIS Script Component Task (NOT the Script Task, which can be added to a Control Flow, but the Script Component, which is added within Data Flows), I added some error handling in the catch part of a try/catch block:
// No connection was created. Exit gracefully.
bool cancelOnError = false;
ComponentMetaData.FireError(ErrorCode: 10, SubComponent: "SubComponent",
    Description: "Couldn't set up the connection. This could be because an invalid host was " +
                 "provided, or due to a firewall blocking the connection.",
    HelpFile: "", HelpContext: 0, pbCancel: out cancelOnError);
This all works fine, with the script task catching the error I raise in my code. I can also see the error in the logs:
Error: 0xA at DFT Extract, SubComponent: Couldn't set up the connection.
This could be because an invalid host was provided, or due to a firewall
blocking the connection.
However, the icon on the task is green, and a subsequent task I defined to process results gets fired (there is nothing to process, though, as this particular error occurs before any data is processed):
The Data Flow correctly shows a red cross icon. Is there any way I can change the icon on the Script Component, or a better way for me to (elegantly) simulate a showstopping error?
I found this page on Microsoft, showing the difference between Script Task and Script Components, which also states:
The Script component runs as a part of the Data Flow task and does not
report results using either of these properties.
That doesn't have me very hopeful, but I am hoping someone might have a work around. I primarily think that showing the green icon is somewhat misleading when we trap an error.
Why not just change the cancelOnError value to true:
bool cancelOnError = true;
Or you can throw an exception:
throw new Exception("Couldn't set up the connection. This could be because an invalid host was " +
                    "provided, or due to a firewall blocking the connection.");
Workaround
If your goal is to ignore rows that were not processed correctly, the best way is to add a column to the OutputBuffer (of type DT_BOOL) and set its value to False if there is an error, otherwise to True.
Then add a Conditional Split after the Script Component to filter rows based on this column, with an expression similar to:
[FlagColumn] == True
I want to create two functions: the first connects to the DB, and the second keeps retrying the connection if the first fails.
In my experiment I turn the DB off at the start so that the connect block fails and the reconnect block is called. After that I turn the DB on, expecting the connection block to succeed, but I get an exception.
Here is my code:
bool connect()
{
    if(connection is null)
    {
        scope(failure) reconnect(); // call reconnect if fail
        this.connection = mydb.lockConnection();
        writeln("connection done");
        return true;
    }
    else
        return false;
}

void reconnect()
{
    writeln("reconnection block");
    if(connection is null)
    {
        while(!connect) // continue till connection will not be established
        {
            Thread.sleep(3.seconds);
            connectionsAttempts++;
            logError("Connection to DB is not active...");
            logError("Reconnection to DB attempt: %s", connectionsAttempts);
            connect();
        }
        if(connection !is null)
        {
            logWarn("Reconnection to DB server done");
        }
    }
}
The log (turning the DB on after a few seconds):
reconnection block
reconnection block
connection done
Reconnection to DB server done
object.Exception#C:\Users\Dima\AppData\Roaming\dub\packages\vibe-d-0.7.30\vibe-d\source\vibe\core\drivers\libevent2.d(326): Failed to connect to host 194.87.235.42:3306: Connection timed out [WSAETIMEDOUT ]
I can't understand why I am getting an exception after "Reconnection to DB server done".
There are two main problems here.
First of all, there shouldn't be any need for automatic retry attempts at all. If it didn't work the first time, and you don't change anything, there's no reason doing the exact same thing should suddenly work the second time. If your network is that unreliable, then you have much bigger problems.
Secondly, if you are going to retry automatically anyway, that code's not going to work:
For one thing, reconnect is calling connect TWICE on every failure: once at the end of the loop body and then immediately again in the loop condition, regardless of whether the connection succeeded. That's probably not what you intended.
But more importantly, you have a potentially infinite recursion going on there: connect calls reconnect if it fails. Then reconnect calls connect up to six times, and each of those calls invokes reconnect AGAIN on failure, looping forever until the connection configuration that didn't work somehow magically starts working (or, perhaps more likely, until you blow the stack and crash).
Honestly, I'd recommend simply throwing that all away: Just call lockConnection (if you're using vibe.d) or new Connection(...) (if you're not using vibe.d) and be done with it. If your connection settings are wrong, then trying the same connection settings again isn't going to fix them.
lockConnection -- Is there supposed to be a matching "unlock"? – Rick James
No, the connection pool in question comes from vibe.d. When the fiber which locked the connection exits (usually meaning "when your server is done processing a request"), any connections the fiber locked automatically get returned to the pool.
I have created an application that calls SQLDriverConnect to connect to a MS SQL Server database called 'MyDB'. After doing some things, it calls SQLDisconnect, but then SSMS fails to delete 'MyDB'. This means some resources are not closed properly. Only after exiting the process does SSMS delete it (i.e. the OS releases them), and all SQLHENV and SQLHDBC handles are released properly.
Code below:
SMARTHSTMT::~SMARTHSTMT()
{
    if (!m_hstmt) return;
    SQLFreeStmt(m_hstmt, SQL_CLOSE);
    SQLFreeStmt(m_hstmt, SQL_UNBIND);
    SQLFreeStmt(m_hstmt, SQL_RESET_PARAMS);
    SQLFreeHandle(SQL_HANDLE_STMT, m_hstmt);
    m_hstmt = nullptr;
}
How can I find which object is not released? Are there any other considerations I should take into account? Any ideas or help appreciated.
Edit: code for disconnecting:
void AConnection::uDisconnect()
{
    if (m_hdbc)
    {
        SQLDisconnect(m_hdbc);
        SQLFreeHandle(SQL_HANDLE_DBC, m_hdbc);
        m_hdbc = nullptr;
    }
    if (m_henv)
    {
        SQLFreeHandle(SQL_HANDLE_ENV, m_henv);
        m_henv = nullptr;
    }
}
You can check if SQLDisconnect() returns SQL_ERROR. If that is the case, a statement might still be open, or a transaction (as you detected) is still open.
Transaction Handling in ODBC is (simplified) like this:
By default auto-commit is enabled: every statement starts a new transaction, and if the statement succeeds the transaction is committed. If you have not changed the commit mode, it is confusing to me that a transaction is still open.
If you have disabled auto-commit, you must manually call SQLEndTran(...) to commit or roll back any ongoing transaction. As far as I know, there is no way in ODBC to ask the driver whether a transaction is still open.
As you mention calls to SQLEndTran(), I guess you have already disabled auto-commit. Looking at my sources, I see that I always do a rollback before closing a connection handle - maybe because of the same problem, I don't remember exactly (it's old code).
Anyway, if you have enabled manual commit mode, I would recommend that you do a rollback before closing the connection handle. Maybe there are tools on the SQL Server side to analyze in more detail what exactly is open at that time.
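As a minimal sketch (assuming manual-commit mode and valid hdbc/henv handles; the function name is hypothetical), the rollback-before-disconnect order would look like this:

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

/* Hypothetical helper: discard any open transaction, then tear down the handles. */
void close_connection(SQLHDBC hdbc, SQLHENV henv)
{
    /* Roll back whatever transaction may still be open on this connection. */
    SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_ROLLBACK);

    SQLDisconnect(hdbc);
    SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
    SQLFreeHandle(SQL_HANDLE_ENV, henv);
}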
See here for more details: https://msdn.microsoft.com/en-us/library/ms131281.aspx
Here is a description of my problem:
I have 2 threads in my program: the main thread, and another one that I create using pthread_create.
The main thread performs various functions on an sqlite3 database. Each function opens the database to perform the required actions and closes it when done.
The other thread simply reads from the database after a set interval of time and uploads the data onto a server. This thread also opens and closes the database to perform its operation.
The problem occurs when both threads happen to have the database open. If one finishes first, it closes the database, causing the other to crash and making the application unusable.
Main requires the database for every operation.
Is there a way I can prevent this from happening? A mutex is one way, but if I use a mutex it will render my main thread useless. The main thread must remain functional at all times while the other thread runs in the background.
Any advice to make this work would be great.
I did not provide snippets, as this problem is a bit too broad for that, but if anything about the problem is unclear please let me know.
EDIT:
static sqlite3 *db = NULL;
Code snippet for opening database
int open_database(char* DB_dir) // argument is the db path
{
    int rc = sqlite3_open(DB_dir, &db);
    if (rc)
    {
        // failed to open message
        sqlite3_close(db);
        db = NULL;
        return SDK_SQL_ERR;
    }
    else
    {
        // success message
    }
    return SDK_OK;
}
And to close the db:
int close_database()
{
    if (db != NULL)
    {
        sqlite3_close(db);
        db = NULL;
        // success message
    }
    return 1;
}
EDIT: I forgot to add that the background thread performs a single write operation that updates one field of the table for each row it uploads to the server.
Have your threads each use their own database connection. There's no reason for the background thread to affect the main thread's connection.
Generally, I would want to be using connection pooling, so that I don't open and close database connections very frequently; connection opening is an expensive operation.
In application servers we very often have many threads, and we find that a connection pool of a few tens of connections is sufficient to service requests on behalf of many hundreds of users.
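A minimal sketch of the per-thread-connection approach (assuming both threads open the same database file; names such as background_worker are hypothetical): each thread opens, uses, and closes only its own handle.

#include <sqlite3.h>
#include <pthread.h>
#include <stdio.h>

/* Thread entry point (started with pthread_create): this thread opens and
   owns its own connection to the same database file as the main thread. */
static void *background_worker(void *arg)
{
    const char *path = arg;
    sqlite3 *db = NULL;

    if (sqlite3_open(path, &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return NULL;
    }

    /* Wait up to 5 seconds if the other connection holds a lock. */
    sqlite3_busy_timeout(db, 5000);

    /* ... read rows, upload them, update the flag field ... */

    sqlite3_close(db); /* closes only this thread's connection */
    return NULL;
}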
Basically, sqlite3 has locking mechanisms built in: BEGIN EXCLUSIVE takes an exclusive lock, and you can also register a sleep callback so that the other thread can do other things while it waits;
see sqlite3_busy_handler().
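For illustration, a minimal sketch of registering such a callback with sqlite3_busy_handler() (the callback body and the retry limit here are just assumptions):

#include <sqlite3.h>
#include <unistd.h>

/* Busy handler: sleep briefly and let SQLite retry, giving up after 10 attempts. */
static int busy_callback(void *data, int attempts)
{
    (void)data;
    if (attempts >= 10)
        return 0;           /* give up: the blocked call returns SQLITE_BUSY */
    usleep(100 * 1000);     /* sleep 100 ms before the next retry */
    return 1;               /* non-zero tells SQLite to try again */
}

/* Register the handler on a connection. */
static void install_busy_handler(sqlite3 *db)
{
    sqlite3_busy_handler(db, busy_callback, NULL);
}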
I was importing TTL ontologies to DBpedia following the blog post http://michaelbloggs.blogspot.de/2013/05/importing-ttl-turtle-ontologies-in-neo4j.html. The post uses BatchInserters to speed up the task. It mentions:
Batch insertion is not transactional. If something goes wrong and you don't shutDown() your database properly, the database becomes inconsistent.
I had to interrupt one of the batch insertion tasks as it was taking much longer than expected, which left my database in an inconsistent state. I get the following message:
db_name store is not cleanly shut down
How can I recover my database from this state? Also, for future purposes, is there a way to commit after importing each file so that reverting to the last good state would be trivial? I thought of git, but I am not sure it would help with a binary file like index.db.
There are some cases where you cannot recover from an unclean shutdown when using the batch inserter API; please note that its package name, org.neo4j.unsafe.batchinsert, contains the word unsafe for a reason. The batch inserter is intended to operate as fast as possible.
If you want to guarantee a clean shutdown you should use a try/finally:
BatchInserter batch = BatchInserters.inserter(<dir>);
try {
    // do your batch insertion work here
} finally {
    batch.shutdown();
}
Another alternative for special cases is registering a JVM shutdown hook. See the following snippet as an example:
BatchInserter batch = BatchInserters.inserter(<dir>);
// register the hook first so shutdown() still runs if the JVM exits unexpectedly
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        batch.shutdown();
    }
});
// do some operations potentially throwing exceptions