Does SQLite support transactions across multiple databases? - database

I've done some searching and also read the FAQ on the SQLite site, no luck finding an answer to my question.
It could very well be that my database approach is flawed, but at the moment, I would like to store my data in multiple SQLite3 databases, so that means separate files. I am very worried about data corruption due to my application possibly crashing, or a power outage in the middle of changing data in my tables.
In order to ensure data integrity, I basically need to do this:
begin transaction
modify table(s) in database #1
modify table(s) in database #2
commit, or rollback if error
Is this supported by SQLite? Also, I am using sqlite.net, specifically the latest version, which is based on SQLite 3.6.23.1.
UPDATE
One more question -- is this something people would usually add to their unit tests? I always unit test databases, but have never had a case like this. And if so, how would you do it? It's almost like you have to pass another parameter to the method like bool test_transaction, and if it's true, throw an exception between database accesses. Then test after the call to make sure the first set of data didn't make it into the other database. But maybe this is something that's covered by the SQLite tests, and should not appear in my test cases.
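To make the idea concrete, here is a rough NUnit-style sketch of the kind of check I mean, written directly against two throwaway database files (assumes System.Data.SQLite; the file and table names are made up):
[Test]
public void WritesToBothDatabasesDisappearAfterRollback()
{
    using (var cnn = new SQLiteConnection("Data Source=test1.db"))
    {
        cnn.Open();
        using (var cmd = cnn.CreateCommand())
        {
            cmd.CommandText = "ATTACH DATABASE 'test2.db' AS second";
            cmd.ExecuteNonQuery();
            cmd.CommandText = "CREATE TABLE IF NOT EXISTS main.t1 (id INTEGER)";
            cmd.ExecuteNonQuery();
            cmd.CommandText = "CREATE TABLE IF NOT EXISTS second.t2 (id INTEGER)";
            cmd.ExecuteNonQuery();
        }

        using (var tx = cnn.BeginTransaction())
        using (var cmd = cnn.CreateCommand())
        {
            cmd.Transaction = tx;
            cmd.CommandText = "INSERT INTO main.t1 (id) VALUES (1)";
            cmd.ExecuteNonQuery();
            cmd.CommandText = "INSERT INTO second.t2 (id) VALUES (1)";
            cmd.ExecuteNonQuery();
            tx.Rollback(); // stands in for an exception thrown between the two writes
        }

        using (var cmd = cnn.CreateCommand())
        {
            // Neither file should contain the new row after the rollback.
            cmd.CommandText = "SELECT COUNT(*) FROM main.t1";
            Assert.AreEqual(0L, cmd.ExecuteScalar());
            cmd.CommandText = "SELECT COUNT(*) FROM second.t2";
            Assert.AreEqual(0L, cmd.ExecuteScalar());
        }
    }
}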

Yes, transactions work across different SQLite databases, and even between SQLite and SQL Server. I have tried it a couple of times.
Some links and info
From here - Transaction between different data sources.
Since the SQLite ADO.NET 2.0 Provider supports transaction enlistment, it is not only possible to perform a transaction spanning several SQLite data sources, but also one spanning other database engines such as SQL Server.
Example:
using (DbConnection cn1 = new SQLiteConnection(" ... "))
using (DbConnection cn2 = new SQLiteConnection(" ... "))
using (DbConnection cn3 = new System.Data.SqlClient.SqlConnection(" ... "))
using (TransactionScope ts = new TransactionScope())
{
    // Connections opened inside the scope enlist in the ambient transaction.
    cn1.Open(); cn2.Open(); cn3.Open();
    DoWork1(cn1);
    DoWork2(cn2);
    DoWork3(cn3);
    // If Complete() is never reached (e.g. an exception is thrown), all three roll back.
    ts.Complete();
}
How to attach a new database:
SQLiteConnection cnn = new SQLiteConnection("Data Source=C:\\myfirstdatabase.db");
cnn.Open();
using (DbCommand cmd = cnn.CreateCommand())
{
    cmd.CommandText = "ATTACH DATABASE 'c:\\myseconddatabase.db' AS [second]";
    cmd.ExecuteNonQuery();
    cmd.CommandText = "SELECT COUNT(*) FROM main.myfirsttable INNER JOIN second.mysecondtable ON main.myfirsttable.id = second.mysecondtable.myfirstid";
    object o = cmd.ExecuteScalar();
}
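With the second file attached, a single SQLite transaction can then cover changes to tables in both files. A rough sketch continuing the example above (the cnn connection is the one opened earlier; the name column and values are made up for illustration):
using (SQLiteTransaction tx = cnn.BeginTransaction())
using (DbCommand cmd = cnn.CreateCommand())
{
    cmd.Transaction = tx;

    cmd.CommandText = "UPDATE main.myfirsttable SET name = 'a' WHERE id = 1";
    cmd.ExecuteNonQuery();

    cmd.CommandText = "UPDATE second.mysecondtable SET name = 'b' WHERE myfirstid = 1";
    cmd.ExecuteNonQuery();

    // Committing applies both updates together; disposing the transaction
    // without calling Commit() rolls both back.
    tx.Commit();
}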

Yes, SQLite explicitly supports multi-database transactions (see https://www.sqlite.org/atomiccommit.html#_multi_file_commit for technical details). However, there is a fairly large caveat. If the database file is in WAL mode, then:
Transactions that involve changes against multiple ATTACHed databases
are atomic for each individual database, but are not atomic across all
databases as a set.
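If you want to guard against that caveat, one option is to check each file's journal mode before relying on cross-database atomicity. A minimal sketch, assuming System.Data.SQLite and the file name from the earlier example:
using (var cnn = new SQLiteConnection("Data Source=C:\\myfirstdatabase.db"))
{
    cnn.Open();
    using (DbCommand cmd = cnn.CreateCommand())
    {
        // PRAGMA journal_mode returns the current mode, e.g. "delete", "truncate" or "wal".
        cmd.CommandText = "PRAGMA journal_mode;";
        string mode = (string)cmd.ExecuteScalar();

        if (string.Equals(mode, "wal", StringComparison.OrdinalIgnoreCase))
        {
            // In WAL mode, a multi-file commit is atomic per database,
            // but not across the whole set of attached databases.
        }
    }
}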

Related

Access 2013: could I use a data macro to trigger an MS SQL SP without delaying the execution in Access?

My client uses a multi-user split Access database (i.e. back-end DB on a server, client DB on each PC) as a point-of-sale system as well as for management functions.
He now needs it to update a remote MS SQL database, but cannot afford to slow down the Access clients as customers are waiting. If I add code to each update / append / delete in the Access client DBs to run the SQL SP, it would slow down each transaction too much (I have tried that).
I am wondering whether I could use trigger macros on the back-end Access DB to run the SQL SPs without slowing down the client DB. Would the client DB have to wait for the trigger macro to run before it resumed its work, or would this be a good way to disconnect the client from an SQL update that is taking place on the server?
I have never used trigger macros and it is going to be a lot of work to research and create these on each table in order to test it so, if anyone can answer the above it could save me many hours of (possibly wasted) work!
I am wondering whether I could use trigger macros on the back-end Access DB to run the SQL SPs
Not directly, no. Event-driven Data Macros in Access cannot execute external code (VBA routines, pass-through queries, etc.) and cannot operate on linked tables.
What you could do is use event-driven Data Macros on the main table(s) to gather the information required by the stored procedures and save it in separate "queuing" table(s) in the back-end, then run a scheduled task every few minutes (or so) to call the stored procedures and pass them the "queued" information.
A simpler solution might be to use a linked table to the SQL db, then create and call a VBA function that builds and executes the update query using SQLPassThrough on the linked table. This allows the function to return immediately, and avoids maintenance and overhead in the SQL db, as well as all the initial setup time. Also, the linked table can retain all connection information, including the username and password, if desired.
STUB of DAO SQLPassthrough call from Data Macro
VBA Function (generic code)
Public Function SetTimeStamp(ByVal RecordID As Variant)
    Dim strSQL As String
    Dim db As DAO.Database

    ' Date literal is quoted so the pass-through statement is valid T-SQL
    strSQL = "UPDATE tblName SET tblName.TimeStampField = '" & Now() & "'"
    strSQL = strSQL & " WHERE RecordIDField = " & RecordID

    Set db = CurrentDb()
    db.Execute strSQL, dbSQLPassThrough + dbRunAsync
End Function
To implement this in the After Update event, use SetLocalVar to call the function.
Data Macro (generic code)
If Updated("MyField1") or Updated("MyField2") or Updated("MyField3")
SetLocalVar
Name varTemp
Expression =SetTimeStamp([TableName].[RecordIDField]
End If
This will cause the function to execute. It in turn runs the query using the SQLPassThrough and asynchronous options, which causes zero slowdown to the Access app. It can easily be modified to pass in the table name and timestamp field name as parameters so that it may be used on any table, write more fields, etc.
I find the advantage of a single Access function is that if you decide to enhance it, you only need to add the data field(s) to your tables and fix a single function. Also, there are no scheduled tasks or queues to maintain, making this a cleaner solution.
Art

Batch insert with EclipseLink not working

I am using EclipseLink 2.6.1 with an Amazon RDS database instance. The following code is used to insert new entities into the database:
tx = em.getTransaction();
tx.begin();
for (T item : persistCollection) {
    em.merge(item);
}
tx.commit();
The object being persisted has a composite primary key (not a generated one). Locally, queries run super fast, but when inserting into the remote DB it is a really slow process (~20 times slower). I have tried to enable JDBC batch writing (eclipselink.jdbc.batch-writing and rewriteBatchedStatements=true) but had no success with it. When logging the queries being executed I only see lots of SELECTs and not one INSERT (the SELECTs are probably there because the objects are detached at first).
My question is how to proceed with this problem. (I would like to get batch writing working and then see how the performance changes, but any help is appreciated.)
Thank you!
Edit:
When using em.persist(item) the loop finishes almost instantly, but after tx.commit() there are lots of queries (I guess one for every persisted item) like:
[EL Fine]: sql: ServerSession(925803196) Connection(60187547) SELECT NAME FROM TICKER WHERE (NAME = ?), bind => [AA]
My model has a @ManyToOne relationship with ticker_name. Why are there again so many slow SELECT queries?

How to use distributed transactions JTA using plain jdbc

I have no idea about JTA. To understand the overall scenario, please follow this link: How to maintain acid property of 3 sequential transaction of three diffrenet databases. Based on the suggestions in that post, I have to use distributed transactions. I am using an Apache Tomcat server.
But as I said, I have no idea about JTA. My problem is that I have more than 15 database connections, and based on some condition the respective database is connected, so I can't create a hibernate.cfg.xml, session factories, and entities for each database.
So my question is: can I use JTA with plain JDBC? And if so, please point me to some links or examples.
Yes, you can use JTA with plain JDBC. The general idea is that instead of using the JDBC Connection object to declare the transaction boundary, you use the TransactionManager object provided by the JTA implementation to declare the transaction boundary.
For example, in the case of the Bitronix Transaction Manager, declaring a transaction boundary across many database connections can be done with the following code:
PoolingDataSource derbyDataSource1 = new PoolingDataSource();
derbyDataSource1.setClassName("org.apache.derby.jdbc.EmbeddedXADataSource");
derbyDataSource1.setUniqueName("derby1");
derbyDataSource1.getDriverProperties().setProperty("databaseName", "database1");
derbyDataSource1.init();

PoolingDataSource derbyDataSource2 = new PoolingDataSource();
derbyDataSource2.setClassName("org.apache.derby.jdbc.EmbeddedXADataSource");
derbyDataSource2.setUniqueName("derby2");
derbyDataSource2.getDriverProperties().setProperty("databaseName", "database2");
derbyDataSource2.init();

BitronixTransactionManager btm = TransactionManagerServices.getTransactionManager();

btm.begin();
try {
    Connection c1 = derbyDataSource1.getConnection();
    Connection c2 = derbyDataSource2.getConnection();
    /* Use c1 and c2 to execute statements against their corresponding DBs as usual. */
    btm.commit();
} catch (SQLException ex) {
    ex.printStackTrace();
    btm.rollback();
}

How to use SQL Server table hints while using LINQ?

What is the way to use SQL Server table hints like NOLOCK while using LINQ?
For example, in SQL I can write SELECT * FROM employee (NOLOCK).
How can we write the same using LINQ?
Here's how you can apply NOLOCK: http://www.hanselman.com/blog/GettingLINQToSQLAndLINQToEntitiesToUseNOLOCK.aspx
(Quoted for posterity; all rights reserved by Mr. Scott):
ProductsNewViewData viewData = new ProductsNewViewData();
using (var t = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions {
        IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
    }))
{
    viewData.Suppliers = northwind.Suppliers.ToList();
    viewData.Categories = northwind.Categories.ToList();
}
I STRONGLY recommend reading about SQL Server transaction isolation modes before using ReadUncommitted.
Here's a very good read
http://blogs.msdn.com/b/davidlean/archive/2009/04/06/sql-server-nolock-hint-other-poor-ideas.aspx
In many cases the Snapshot isolation level should suffice. Also, if you really need to, you can set the default transaction isolation level for your database using
SET TRANSACTION ISOLATION LEVEL --levelHere
Other good ideas include packaging your context in a wrapper that encapsulates each call with the desired isolation level (maybe you need NOLOCK 95% of the time and SERIALIZABLE 5% of the time). This can be done using extension methods, or normal methods, with code like:
viewData.Categories = northwind.Categories.AsReadCommited().ToList();
Which takes your IQueryable and does the trick mentioned by Rob.
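For example, here is a minimal sketch of such an extension method. The name ToListWithIsolation and its shape are mine, not from the post above; unlike the one-liner above, it materializes the query inside a TransactionScope at the requested isolation level:
using System.Collections.Generic;
using System.Linq;
using System.Transactions;

public static class QueryIsolationExtensions
{
    public static List<T> ToListWithIsolation<T>(
        this IQueryable<T> query, IsolationLevel isolationLevel)
    {
        var options = new TransactionOptions { IsolationLevel = isolationLevel };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // The SQL is generated and executed here, inside the scope,
            // so it runs at the requested isolation level.
            List<T> result = query.ToList();
            scope.Complete();
            return result;
        }
    }
}
Usage would then look like:
viewData.Categories = northwind.Categories.ToListWithIsolation(IsolationLevel.ReadUncommitted);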
Hope it helps
Luke

Which ORM framework can best handle an MVCC database design?

When designing a database to use MVCC (Multi-Version Concurrency Control), you create tables with either a boolean field like "IsLatest" or an integer "VersionId", and you never do any updates; you only insert new records when things change.
MVCC gives you automatic auditing for applications that require a detailed history, and it also relieves pressure on the database with regards to update locks. The cons are that it makes your data size much bigger and slows down selects, due to the extra clause necessary to get the latest version. It also makes foreign keys more complicated.
(Note that I'm not talking about the native MVCC support in RDBMSs like SQL Server's snapshot isolation level)
This has been discussed in other posts here on Stack Overflow. [todo - links]
I am wondering, which of the prevalent entity/ORM frameworks (Linq to Sql, ADO.NET EF, Hibernate, etc) can cleanly support this type of design? This is a major change to the typical ActiveRecord design pattern, so I'm not sure if the majority of tools that are out there could help someone who decides to go this route with their data model. I'm particularly interested in how foreign keys would be handled, because I'm not even sure of the best way to data model them to support MVCC.
I might consider implementing the MVCC tier purely in the DB, using stored procs and views to handle the data operations. Then you could present a reasonable API to any ORM that is capable of mapping to and from stored procs, and you could let the DB deal with the data integrity issues (since it's pretty much built for that). If you went this way, you might want to look at a more pure mapping solution like iBATIS or iBATIS.NET.
I designed a database similarly (only INSERTs — no UPDATEs, no DELETEs).
Almost all of my SELECT queries were against views of only the current rows for each table (highest revision number).
The views looked like this…
SELECT
    dbo.tblBook.BookId,
    dbo.tblBook.RevisionId,
    dbo.tblBook.Title,
    dbo.tblBook.AuthorId,
    dbo.tblBook.Price,
    dbo.tblBook.Deleted
FROM
    dbo.tblBook INNER JOIN
    (
        SELECT
            BookId,
            MAX(RevisionId) AS RevisionId
        FROM
            dbo.tblBook
        GROUP BY
            BookId
    ) AS CurrentBookRevision ON
        dbo.tblBook.BookId = CurrentBookRevision.BookId AND
        dbo.tblBook.RevisionId = CurrentBookRevision.RevisionId
WHERE
    dbo.tblBook.Deleted = 0
And my inserts (and updates and deletes) were all handled by stored procedures (one per table).
The stored procedures looked like this…
ALTER procedure [dbo].[sp_Book_CreateUpdateDelete]
    @BookId uniqueidentifier,
    @RevisionId bigint,
    @Title varchar(256),
    @AuthorId uniqueidentifier,
    @Price smallmoney,
    @Deleted bit
as
insert into tblBook
(
    BookId,
    RevisionId,
    Title,
    AuthorId,
    Price,
    Deleted
)
values
(
    @BookId,
    @RevisionId,
    @Title,
    @AuthorId,
    @Price,
    @Deleted
)
Revision numbers were handled per-transaction in the Visual Basic code…
Shared Sub Save(ByVal UserId As Guid, ByVal Explanation As String, ByVal Commands As Collections.Generic.Queue(Of SqlCommand))
    Dim Connection As SqlConnection = New SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings("Connection").ConnectionString)
    Connection.Open()
    Dim Transaction As SqlTransaction = Connection.BeginTransaction
    Try
        Dim RevisionId As Integer = Nothing
        Dim RevisionCommand As SqlCommand = New SqlCommand("sp_Revision_Create", Connection)
        RevisionCommand.CommandType = CommandType.StoredProcedure
        RevisionCommand.Parameters.AddWithValue("@RevisionId", 0)
        RevisionCommand.Parameters(0).SqlDbType = SqlDbType.BigInt
        RevisionCommand.Parameters(0).Direction = ParameterDirection.Output
        RevisionCommand.Parameters.AddWithValue("@UserId", UserId)
        RevisionCommand.Parameters.AddWithValue("@Explanation", Explanation)
        RevisionCommand.Transaction = Transaction
        LogDatabaseActivity(RevisionCommand)
        If RevisionCommand.ExecuteNonQuery() = 1 Then 'rows inserted
            RevisionId = CInt(RevisionCommand.Parameters(0).Value) 'generated key
        Else
            Throw New Exception("Zero rows affected.")
        End If
        For Each Command As SqlCommand In Commands
            Command.Connection = Connection
            Command.Transaction = Transaction
            Command.CommandType = CommandType.StoredProcedure
            Command.Parameters.AddWithValue("@RevisionId", RevisionId)
            LogDatabaseActivity(Command)
            If Command.ExecuteNonQuery() < 1 Then 'rows inserted
                Throw New Exception("Zero rows affected.")
            End If
        Next
        Transaction.Commit()
    Catch ex As Exception
        Transaction.Rollback()
        Throw New Exception("Rolled back transaction", ex)
    Finally
        Connection.Close()
    End Try
End Sub
I created an object for each table, each with constructors, instance properties and methods, create-update-delete commands, a bunch of finder functions, and IComparable sorting functions. It was a huge amount of code.
One-to-one DB table to VB object...
Public Class Book
    Implements IComparable

#Region " Constructors "

    Private _BookId As Guid
    Private _RevisionId As Integer
    Private _Title As String
    Private _AuthorId As Guid
    Private _Price As Decimal
    Private _Deleted As Boolean
    ...
    Sub New(ByVal BookRow As DataRow)
        Try
            _BookId = New Guid(BookRow("BookId").ToString)
            _RevisionId = CInt(BookRow("RevisionId"))
            _Title = CStr(BookRow("Title"))
            _AuthorId = New Guid(BookRow("AuthorId").ToString)
            _Price = CDec(BookRow("Price"))
        Catch ex As Exception
            'TO DO: log exception
            Throw New Exception("DataRow does not contain valid Book data.", ex)
        End Try
    End Sub

#End Region
    ...
#Region " Create, Update & Delete "

    Function Save() As SqlCommand
        If _BookId = Guid.Empty Then
            _BookId = Guid.NewGuid()
        End If
        Dim Command As SqlCommand = New SqlCommand("sp_Book_CreateUpdateDelete")
        Command.Parameters.AddWithValue("@BookId", _BookId)
        Command.Parameters.AddWithValue("@Title", _Title)
        Command.Parameters.AddWithValue("@AuthorId", _AuthorId)
        Command.Parameters.AddWithValue("@Price", _Price)
        Command.Parameters.AddWithValue("@Deleted", _Deleted)
        Return Command
    End Function

    Shared Function Delete(ByVal BookId As Guid) As SqlCommand
        Dim Doomed As Book = FindByBookId(BookId)
        Doomed.Deleted = True
        Return Doomed.Save()
    End Function
    ...
#End Region
    ...
#Region " Finders "

    Shared Function FindByBookId(ByVal BookId As Guid, Optional ByVal TryDeleted As Boolean = False) As Book
        Dim Command As SqlCommand
        If TryDeleted Then
            Command = New SqlCommand("sp_Book_FindByBookIdTryDeleted")
        Else
            Command = New SqlCommand("sp_Book_FindByBookId")
        End If
        Command.Parameters.AddWithValue("@BookId", BookId)
        If Database.Find(Command).Rows.Count > 0 Then
            Return New Book(Database.Find(Command).Rows(0))
        Else
            Return Nothing
        End If
    End Function
Such a system preserves all past versions of each row, but can be a real pain to manage.
PROS:
Total history preserved
Fewer stored procedures
CONS:
Relies on the application, rather than the database, for data integrity
Huge amount of code to be written
No foreign keys managed within the database (goodbye automatic Linq-to-SQL-style object generation)
I still haven't come up with a good user interface to retrieve all that preserved past versioning.
CONCLUSION:
I wouldn't go to such trouble on a new project without some easy-to-use out-of-the-box ORM solution.
I'm curious if the Microsoft Entity Framework can handle such database designs well.
Jeff and the rest of the Stack Overflow team must have had to deal with similar issues while developing Stack Overflow: past revisions of edited questions and answers are saved and retrievable.
I believe Jeff has stated that his team used Linq to SQL and MS SQL Server.
I wonder how they handled these issues.
To the best of my knowledge, ORM frameworks want to generate the CRUD code for you, so they would have to be explicitly designed to support an MVCC option; I don't know of any that do so out of the box.
From an entity-framework standpoint, CSLA doesn't implement persistence for you at all -- it just defines a "Data Adapter" interface that you use to implement whatever persistence you need. So you could set up code-generation (CodeSmith, etc.) templates to auto-generate CRUD logic for your CSLA entities that goes along with an MVCC database architecture.
This approach would most likely work with any entity framework, not just CSLA, but it would be a very "clean" implementation in CSLA.
Check out the Envers project -- it works nicely with JPA/Hibernate applications and basically does this for you: it keeps track of different versions of each entity in another table and gives you SVN-like possibilities ("Gimme the version of Person being used on 2008-11-05...").
http://www.jboss.org/envers/
/Jens
I always figured you'd use a DB trigger on update and delete to push those rows out into a TableName_Audit table.
That would work with ORMs, give you your history, and wouldn't decimate select performance on that table. Is that a good idea, or am I missing something?
What we do is just use a normal ORM (Hibernate) and handle the MVCC with views plus INSTEAD OF triggers.
So there is a v_emp view, which just looks like a normal table; you can insert and update into it fine, and when you do, the triggers handle actually inserting the correct data into the base table.
Note: I hate this method :) I'd go with a stored procedure API as suggested by Tim.
