Upgrade database by using onUpgrade callback properly

I'm fetching data from a website (JSON) and saving it to a database. This works well. However, I want to upgrade the database with a new version number every time the data fetched from the internet changes. For that, I think the best way is to drop the table, bump the database version number and recreate the table before filling it with the new data (unless there is a way to insert only new records and update only the old ones that have changed).
I saw that it could be done with the onUpgrade callback on a Database from the sqflite plugin.
So, I created a Helper with an init() method that opens the database.
However, I don't understand when the onUpgrade callback is called. Indeed, in the code below, version is always 1.
I would like to have a method that initializes the DB and:
creates it if it does not exist, OR
opens the current version if no version number is specified (something like that), OR
upgrades to a new version by auto-incrementing the version number (by calling database.upgrade(), for example).
Do you think this is possible in a single method, or do I need to split it into two methods? If so, what's the point of the onUpgrade callback?
class DBHelper {
  // Private constructor
  DBHelper._privateConstructor();

  // Get an instance of DBHelper
  static final DBHelper _dbHelper = DBHelper._privateConstructor();

  // Factory returning the singleton instance of DBHelper
  factory DBHelper() => _dbHelper;

  static Database _database;

  Future<Database> get database async {
    if (_database != null) return _database;
    // Lazily instantiate the db the first time it is accessed
    _database = await init();
    return _database;
  }

  Future<Database> init() async {
    print("DBHelper: init database");
    // Get a location using path_provider
    String path = await getDBPath();
    // I THINK ALL HAPPENS HERE
    return await openDatabase(path, version: 1, onCreate: _onCreate, onUpgrade: _onUpgrade);
  }

  Future<void> _onCreate(Database db, int version) async {
    print("DBHelper: _onCreate called");
    // When creating the db, create the table
    await _createTable(db);
  }

  Future<void> _onUpgrade(Database db, int oldVersion, int newVersion) async {
    print("DBHelper: _onUpgrade called");
    try {
      await db.transaction((Transaction txn) async {
        await txn.execute("DROP TABLE TABLE_NAME");
      });
    } catch (e) {
      print("Error: " + e.toString());
    }
    await _createTable(db);
  }

  // Takes the database passed to onCreate/onUpgrade instead of awaiting the
  // getter, which would re-enter init() while the database is still opening.
  Future<void> _createTable(Database db) async {
    try {
      await db.transaction((Transaction txn) async {
        await txn.execute("CREATE TABLE TABLE_NAME ("
            "TABLE_ID INTEGER PRIMARY KEY AUTOINCREMENT,"
            "TABLE_INT INTEGER,"
            "TABLE_TEXT TEXT)");
      });
    } catch (e) {
      print("Error: " + e.toString());
    }
  }
}
Best

Database versioning in sqflite matches what is done on Android, where the version is a constant for a particular version of your app with a particular schema. onCreate/onUpgrade should typically get called only once in the lifecycle of your application. In any case, onUpgrade won't get called unless you close and re-open your database. See https://github.com/tekartik/sqflite/blob/master/sqflite/doc/migration_example.md.
So I would say that the user version as it exists now does not match your need, and I would not use this value nor onUpgrade in your scenario. However, you could define your own singleton value (i.e. your own versioning system) and drop/create the table in a transaction while the database is open. Nothing prevents you from doing that.

Related

.NET 7 Distributed Transactions issues

I am developing a small POC application to test .NET 7 support for distributed transactions, since this is a pretty important aspect of our workflow.
So far I've been unable to make it work and I'm not sure why. It seems to me to be either some kind of bug in .NET 7 or I'm missing something.
In short, the POC is pretty simple: it runs a WorkerService which does two things:
Saves into a "business database"
Publishes a message on an NServiceBus queue which uses the MSSQL transport.
Without the transaction scope this works fine; however, when adding the transaction scope I'm asked to turn on support for distributed transactions using:
TransactionManager.ImplicitDistributedTransactions = true;
The executable code in the worker service is as follows:
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    int number = 0;
    try
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            number = number + 1;
            using var transactionScope = TransactionUtils.CreateTransactionScope();
            await SaveDummyDataIntoTable2Dapper($"saved {number}").ConfigureAwait(false);
            await messageSession.Publish(new MyMessage { Number = number }, stoppingToken)
                .ConfigureAwait(false);
            _logger.LogInformation("Publishing message {number}", number);
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            transactionScope.Complete();
            _logger.LogInformation("Transaction complete");
            await Task.Delay(1000, stoppingToken);
        }
    }
    catch (Exception e)
    {
        _logger.LogError("Exception: {ex}", e);
        throw;
    }
}
Transaction scope is created with the following parameters:
public class TransactionUtils
{
    public static TransactionScope CreateTransactionScope()
    {
        var transactionOptions = new TransactionOptions();
        transactionOptions.IsolationLevel = IsolationLevel.ReadCommitted;
        transactionOptions.Timeout = TransactionManager.MaximumTimeout;
        return new TransactionScope(TransactionScopeOption.Required, transactionOptions, TransactionScopeAsyncFlowOption.Enabled);
    }
}
The code for saving into the database uses the simple Dapper GenericRepository library:
private async Task SaveDummyDataIntoTable2Dapper(string data)
{
    using var scope = ServiceProvider.CreateScope();
    var mainTableRepository =
        scope.ServiceProvider
            .GetRequiredService<MainTableRepository>();
    await mainTableRepository.InsertAsync(new MainTable()
    {
        Data = data,
        UpdatedDate = DateTime.Now
    });
}
I had to use a scope here since the repository is scoped and the worker is a singleton, so it cannot be injected directly.
I've tried persistence with EF Core as well, with the same results:
The transactionScope.Complete() line passes, and then disposing of the transaction scope hangs (sometimes it manages to insert a couple of rows and then hangs).
Without the transaction scope everything works fine.
I'm not sure what (if anything) I'm missing here, or whether this simply still does not work in .NET 7?
Note that I have MSDTC enabled on my machine and I'm executing this on Windows 10.
We've been able to solve this by using the following code.
With this modification, DTC is actually invoked correctly and works from within .NET 7.
using var transactionScope = TransactionUtils.CreateTransactionScope().EnsureDistributed();
The EnsureDistributed extension method implementation is as follows:
public static TransactionScope EnsureDistributed(this TransactionScope ts)
{
    Transaction.Current?.EnlistDurable(DummyEnlistmentNotification.Id, new DummyEnlistmentNotification(),
        EnlistmentOptions.None);
    return ts;
}

internal class DummyEnlistmentNotification : IEnlistmentNotification
{
    internal static readonly Guid Id = new("8d952615-7f67-4579-94fa-5c36f0c61478");

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        preparingEnlistment.Prepared();
    }

    public void Commit(Enlistment enlistment)
    {
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}
This is a 10-year-old code snippet, yet it works (I'm guessing because .NET Core merely copied and refactored the distributed-transactions code from .NET Framework, bugs included).
What it does is create a distributed transaction right away rather than creating an LTM transaction and then promoting it to DTC when required.
A more detailed explanation can be found here:
https://www.davidboike.dev/2010/04/forcibly-creating-a-distributed-net-transaction/
https://github.com/davybrion/companysite-dotnet/blob/master/content/blog/2010-03-msdtc-woes-with-nservicebus-and-nhibernate.md
Ensure you're using Microsoft.Data.SqlClient v5.1 or later.
Replace all usings of System.Data.SqlClient with Microsoft.Data.SqlClient.
Ensure ImplicitDistributedTransactions is set to true:
TransactionManager.ImplicitDistributedTransactions = true;
using (var ts = new TransactionScope(your options))
{
    TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);
    ... your code ..
    ts.Complete();
}
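Putting those steps together, a minimal end-to-end sketch might look like the following; the connection string is a placeholder, the table and column names are taken from the question, and it assumes Microsoft.Data.SqlClient 5.1+ running on Windows with MSDTC available:
using System;
using System.Transactions;
using Microsoft.Data.SqlClient;

class DistributedTransactionSketch
{
    static void Main()
    {
        // Opt in once, at startup, before any TransactionScope is created.
        TransactionManager.ImplicitDistributedTransactions = true;

        var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };
        using (var ts = new TransactionScope(TransactionScopeOption.Required, options,
                   TransactionScopeAsyncFlowOption.Enabled))
        {
            // Forces promotion to a distributed (MSDTC) transaction up front instead of
            // starting as a lightweight LTM transaction.
            TransactionInterop.GetTransmitterPropagationToken(Transaction.Current);

            using (var connection = new SqlConnection("<your connection string>"))
            {
                connection.Open(); // enlists in the ambient distributed transaction
                using var command = new SqlCommand(
                    "INSERT INTO MainTable (Data, UpdatedDate) VALUES (@data, @date)", connection);
                command.Parameters.AddWithValue("@data", "saved 1");
                command.Parameters.AddWithValue("@date", DateTime.Now);
                command.ExecuteNonQuery();
            }

            // ... the second resource (e.g. the NServiceBus SQL transport) would enlist here ...

            ts.Complete();
        }
    }
}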

Is there a way to check the database version in a flutter app using sqflite?

I am pretty new to Flutter and I have some questions about how database version control works for apps using sqflite.
I have a database file in my assets folder that is copied onto the user's device when the app is launched. I am wondering if there is any way to check which version of the database file the user currently has, so that it does not have to be copied over every time.
Case:
I make an update to the database file that is shipped with an app update in the Google Play store.
Can the version of the database file that the user currently has be checked to determine whether a new file should replace the old one?
If so, can I do this check every time I update my app? Do I have to do it every time the app is opened?
If not, how is this best handled? What is the best way of handling database updates of files stored on a user's device?
sqflite already has built-in version management, but it only works if you have a migration script. From what you've said, that's probably not your case, so you'll need to manage your version control by hand:
import 'dart:io';
import 'dart:typed_data';

import 'package:flutter/services.dart' show rootBundle;
import 'package:path/path.dart';
import 'package:sqflite/sqflite.dart';

class DbHelper {
  static const NEW_DB_VERSION = 2;

  static final DbHelper _instance = DbHelper.internal();
  factory DbHelper() => _instance;
  DbHelper.internal();

  Database _db;

  Future<Database> get db async {
    if (_db != null) {
      return _db;
    } else {
      _db = await initDb();
      return _db;
    }
  }

  Future<Database> initDb() async {
    final databasesPath = await getDatabasesPath();
    final path = join(databasesPath, "database.db");
    var db = await openDatabase(path);
    // If the database does not exist yet, getVersion() returns 0.
    if (await db.getVersion() < NEW_DB_VERSION) {
      await db.close();
      // Delete the old database so you can copy the new one.
      await deleteDatabase(path);
      try {
        await Directory(dirname(path)).create(recursive: true);
      } catch (_) {}
      // Copy the db from the assets to the databases folder.
      ByteData data = await rootBundle.load("assets/databases/database.db");
      List<int> bytes = data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes);
      await File(path).writeAsBytes(bytes, flush: true);
      // Open the newly copied db.
      db = await openDatabase(path);
      // Set the new version on the copied db so you do not need to do it
      // manually on your bundled database.db.
      await db.setVersion(NEW_DB_VERSION);
    }
    return db;
  }
}

Datetime default gets null after update with Entity Framework

I have a table with a DATETIME DEFAULT field.
CREATE TABLE People
(
    Id INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
    Name VARCHAR(100) NOT NULL,
    ...
    DtOccurrence DATETIME DEFAULT getDATE()
);
I used scaffolding to generate the class and entity for the controllers + views.
The default CRUD works fine, but if I try to update a record, [DtOccurrence] gets set to NULL in the database.
How can I fix it? Thanks in advance.
Create saves OK.
Updating only the [Name] field sends a null [DtOccurrence] to the database, and my auto-generated class doesn't have this [DtOccurrence] field:
UPDATE:
CONTROLLER Create method
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Create([Bind("Id,Name")] People people)
{
    if (ModelState.IsValid)
    {
        _context.Add(people);
        await _context.SaveChangesAsync();
        return RedirectToAction("Edit", "Pessoas", new { people.Id });
    }
    return View(people);
}
CONTROLLER Edit method
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Edit(int id, [Bind("Id,Name,")] People people)
{
    if (id != people.Id)
    {
        return NotFound();
    }
    if (ModelState.IsValid)
    {
        try
        {
            _context.Update(people);
            await _context.SaveChangesAsync();
        }
        catch (DbUpdateConcurrencyException)
        {
            if (!PeopleExists(people.Id))
            {
                return NotFound();
            }
            else
            {
                throw;
            }
        }
        return RedirectToAction(nameof(Index));
    }
    return View(people);
}
Auto-generated class from scaffolding:
public partial class Pessoa
{
    public Pessoa()
    {
    }

    public int Id { get; set; }
    public string Name { get; set; }
}
As mentioned in my comment, while your initial request to provide data to the view was given an entity from the DbContext, the object (People) you get back in your Edit method is not the same entity and is not associated with your DbContext. It is a deserialized copy. Calling Update with it when it does not contain all fields will result in those fields getting set to null. Calling Update with a detached entity like this from a client is also an attack vector for unauthorized updates to your domain. (Debugging tools/plugins can intercept the call to the server and alter the entity data in any number of ways.)
public async Task<IActionResult> Edit(int id, [Bind("Id,Name,")] People people)
{
    if (!ModelState.IsValid)
        return View(people);

    var dataPeople = await _context.People.SingleAsync(x => x.Id == people.Id);
    dataPeople.Name = people.Name;
    // dataPeople is a tracked entity and will be saved, not people, which is acting as a view model.
    await _context.SaveChangesAsync();
    return RedirectToAction(nameof(Index));
}
Using Update will generate an update statement where all fields on the entity are overwritten. You may decide to pass an incomplete entity to the view, or an incomplete entity back from the view, but EF has no notion of which data is missing because it wasn't provided/changed versus what was deliberately cleared out, so it updates everything. Instead, you should load the entity from the DbContext based on the ID provided (which will error if the ID is not found), then set the properties you want to change on that tracked entity before calling SaveChanges. This ensures that the resulting SQL update statement contains only the columns you want changed.
As a general rule I recommend using view model classes for communicating models between server and client, so it is clear what the data being passed around actually is. Passing entities between server and views is an anti-pattern which is prone to performance problems, serialization issues, and both intentional and accidental data corruption.
Additional validations should include making sure the changes are complete/legal, and potentially checking a row version number or last-modified date between the passed model and the data loaded from the DB to ensure they match. When the user opened the page they may have gotten version #1 of the record. When they finally submit the form, if the DB returned version #2, it would indicate that someone else modified that row in the meantime (otherwise you would be overwriting their changes).
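To sketch both of those suggestions together, a hedged illustration follows; the PeopleEditViewModel class and the RowVersion/rowversion column are hypothetical additions for this example, not part of the scaffolded model, and SequenceEqual requires using System.Linq:
// Hypothetical view model: only the fields the edit form may change, plus a
// concurrency stamp so stale edits can be detected.
public class PeopleEditViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public byte[] RowVersion { get; set; } // assumes a rowversion column was added to People
}

[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Edit(int id, PeopleEditViewModel model)
{
    if (id != model.Id)
        return NotFound();
    if (!ModelState.IsValid)
        return View(model);

    // Load the tracked entity; only the columns set below end up in the UPDATE.
    var dataPeople = await _context.People.SingleAsync(x => x.Id == model.Id);

    // Reject the edit if someone else changed the row since the form was rendered.
    if (!dataPeople.RowVersion.SequenceEqual(model.RowVersion))
    {
        ModelState.AddModelError(string.Empty, "This record was modified by another user.");
        return View(model);
    }

    dataPeople.Name = model.Name;
    await _context.SaveChangesAsync();
    return RedirectToAction(nameof(Index));
}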

Correct concurrency handling using EF Core 2.1 with SQL Server

I am currently working on an API using ASP.NET Core Web API along with Entity Framework Core 2.1 and a SQL Server database. The API is used to transfer money between two accounts, A and B. Given the nature of account B, which is an account that accepts payments, a lot of concurrent requests might be executed at the same moment. As you know, if this is not well managed it can result in some users not seeing their payments arrive.
Having spent days trying to get concurrency handling right, I can't figure out what the best approach is. For the sake of simplicity I created a test project trying to reproduce this concurrency issue.
In the test project, I have two routes: request1 and request2. Each one performs a transfer to the same user; the first one transfers an amount of 10 and the second one 20. I put a Thread.Sleep(10000) in the first one, as follows:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            Thread.Sleep(10000);
            w.Amount = w.Amount + 10;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 1 executed";
}

[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    using (var transaction = _context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
    {
        try
        {
            Wallet w = _context.Wallets.Where(ww => ww.UserId == 1).FirstOrDefault();
            w.Amount = w.Amount + 20;
            w.Inserts++;
            _context.Wallets.Update(w);
            _context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
        }
    }
    return "request 2 executed";
}
After executing request1 and then request2 in a browser, the first transaction is rolled back due to:
InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
I could also retry the transaction, but isn't there a better way, for example using locks?
Serializable, being the most isolated level and also the most costly, is described in the documentation as follows:
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
This means no other transaction can update data that has been read by another transaction, which works as intended here, since the update in the request2 route waits for the first transaction (request1) to commit.
The problem is that we need to block reads by other transactions once the current transaction has read the wallet row. To solve this, I need to use locking so that when the first select statement in request1 executes, all transactions after it have to wait for the first transaction to finish before they can select the correct value. Since EF Core has no support for locking, I need to execute a SQL query directly, so when selecting the wallet I'll add a row lock to the selected row:
// This locks the wallet row with id 1;
// the default transaction isolation level is enough here.
Wallet w = _context.Wallets.FromSql("select * from wallets with (XLOCK, ROWLOCK) where id = 1").FirstOrDefault();
Thread.Sleep(10000);
w.Amount = w.Amount + 10;
w.Inserts++;
_context.Wallets.Update(w);
_context.SaveChanges();
transaction.Commit();
Now this works perfectly; even after executing multiple requests, the combined result of the transfers is correct. In addition, I'm using a transactions table that holds every money transfer made, along with its status, to keep a record of each transaction; in case something goes wrong I am able to recompute all wallet amounts from this table.
Now there are other ways of doing it, like:
Stored procedure: but I want my logic to stay at the application level
Making a synchronized method to handle the database logic: this way all database requests are executed in a single thread. I read a blog post that advises this approach, but maybe we'll use multiple servers for scalability.
I don't know if I'm not searching well, but I can't find good material about handling pessimistic concurrency with Entity Framework Core; even while browsing GitHub, most of the code I've seen doesn't use locking.
Which brings me to my question: is this the correct way of doing it?
Cheers and thanks in advance.
My suggestion is to catch DbUpdateConcurrencyException and use entry.GetDatabaseValues() and entry.OriginalValues.SetValues(databaseValues) in your retry logic. There is no need to lock the DB.
Here is the sample from the EF Core documentation page:
using (var context = new PersonContext())
{
    // Fetch a person from database and change phone number
    var person = context.People.Single(p => p.PersonId == 1);
    person.PhoneNumber = "555-555-5555";

    // Change the person's name in the database to simulate a concurrency conflict
    context.Database.ExecuteSqlCommand(
        "UPDATE dbo.People SET FirstName = 'Jane' WHERE PersonId = 1");

    var saved = false;
    while (!saved)
    {
        try
        {
            // Attempt to save changes to the database
            context.SaveChanges();
            saved = true;
        }
        catch (DbUpdateConcurrencyException ex)
        {
            foreach (var entry in ex.Entries)
            {
                if (entry.Entity is Person)
                {
                    var proposedValues = entry.CurrentValues;
                    var databaseValues = entry.GetDatabaseValues();

                    foreach (var property in proposedValues.Properties)
                    {
                        var proposedValue = proposedValues[property];
                        var databaseValue = databaseValues[property];

                        // TODO: decide which value should be written to database
                        // proposedValues[property] = <value to be saved>;
                    }

                    // Refresh original values to bypass next concurrency check
                    entry.OriginalValues.SetValues(databaseValues);
                }
                else
                {
                    throw new NotSupportedException(
                        "Don't know how to handle concurrency conflicts for "
                        + entry.Metadata.Name);
                }
            }
        }
    }
}
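One note on applying this to the wallet example: SaveChanges only throws DbUpdateConcurrencyException for these updates if the entity has a concurrency token. A minimal sketch for the Wallet from the question, assuming a SQL Server rowversion column is added (the RowVersion property name is my own choice, not part of the original model):
using System.ComponentModel.DataAnnotations;

public class Wallet
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public decimal Amount { get; set; }
    public int Inserts { get; set; }

    // Mapped to a SQL Server rowversion column. EF Core includes it in the WHERE
    // clause of the UPDATE, so a concurrent change makes SaveChanges throw
    // DbUpdateConcurrencyException instead of silently overwriting the row.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}
The same mapping can also be configured with the fluent API via Property(w => w.RowVersion).IsRowVersion().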
You can use a distributed lock mechanism, with Redis for example.
Also, you can lock per userId, so it will not block the method for other users, as sketched below.
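For example, here is a rough sketch of per-wallet locking built on StackExchange.Redis's LockTake/LockRelease primitives; the key format, expiry and retry delay are arbitrary choices for illustration, and the delegate passed in would contain the read-modify-write and SaveChanges that the request currently does inline:
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class WalletLock
{
    private readonly IDatabase _redis;

    public WalletLock(IConnectionMultiplexer connection) => _redis = connection.GetDatabase();

    public async Task WithWalletLockAsync(int userId, Func<Task> action)
    {
        var key = $"lock:wallet:{userId}";     // one lock per wallet, not a global lock
        var token = Guid.NewGuid().ToString(); // owner token so only this caller can release it
        var expiry = TimeSpan.FromSeconds(30); // safety net if the process dies mid-update

        // Spin until the lock is acquired; production code would add a timeout/backoff policy.
        while (!await _redis.LockTakeAsync(key, token, expiry))
        {
            await Task.Delay(50);
        }

        try
        {
            await action();
        }
        finally
        {
            await _redis.LockReleaseAsync(key, token);
        }
    }
}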
Why don't you handle the concurrency problem in the code? Why does it need to be in the DB layer?
You can have a method that updates the value of a given wallet by a given amount, and you can use a simple lock there, like this:
private readonly object walletLock = new object();

public void UpdateWalletAmount(int userId, int amount)
{
    lock (walletLock)
    {
        Wallet w = _context.Wallets.Where(ww => ww.UserId == userId).FirstOrDefault();
        w.Amount = w.Amount + amount;
        w.Inserts++;
        _context.Wallets.Update(w);
        _context.SaveChanges();
    }
}
So your methods will look like this:
[HttpGet]
[Route("request1")]
public async Task<string> request1()
{
    try
    {
        UpdateWalletAmount(1, 10);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 1 executed";
}

[HttpGet]
[Route("request2")]
public async Task<string> request2()
{
    try
    {
        UpdateWalletAmount(1, 20);
    }
    catch (Exception ex)
    {
        // log error
    }
    return "request 2 executed";
}
You don't even need to use a transaction in this context.

Why is the Entity Framework inserting when it should update?

I use the following RIA Services call to register and return a Project entity.
// On Server; inside RIA Domain Service
[Invoke]
public Project CreateNewProject(String a_strKioskNumber)
{
    Decimal dProjectID = ObjectContext.RegisterProjectNumber(a_strKioskNumber)
                                      .FirstOrDefault() ?? -1m;

    // Tried this but it returned zero (0)
    //int nChanged = ObjectContext.SaveChanges();

    var project = (from qProject in ObjectContext.Projects.Include("ProjectItems")
                   where qProject.ID == dProjectID
                   select qProject)
                  .FirstOrDefault();

    if (project == null)
        return null;

    return project;
}
As you can see, it calls a stored procedure that returns a project ID. It uses this ID to look up the Project entity itself and return it. When the Project entity is returned to the client it is detached. I attach it to the DomainContext and modify it.
// At Client
_activeProject = a_invokeOperation.Value; // <-- Detached
_context.Projects.Attach(_activeProject); // <-- Unmodified
if (_activeProject != null)
{
    _activeProject.AuthenticationType = "strong"; // <-- Modified
    _activeProject.OwnerID = customer.ID;
    _projectItems.Do(pi => _activeProject.ProjectItems.Add(pi));
    _activeProject.Status = "calculationrequired";
}
At this point it has an entity state of Modified. When I submit changes it gives me an exception regarding a UNIQUE KEY violation as if it is trying to insert it rather than update it.
// At Client
_context.SubmitChanges(OnProjectSaved, a_callback);
I'm using the same DomainContext instance for all operations. Why should this not work?
What's going wrong? This is rather frustrating.
Edits:
I tried this (as suggested by Jeff):
[Invoke]
public void SaveProject(Project a_project)
{
    var project = (from qProject in ObjectContext.Projects
                   where qProject.ID == a_project.ID
                   select qProject)
                  .FirstOrDefault();

    project.SubmitDate = a_project.SubmitDate;
    project.PurchaseDate = a_project.PurchaseDate;
    project.MachineDate = a_project.MachineDate;
    project.Status = a_project.Status;
    project.AuthenticationType = a_project.AuthenticationType;
    project.OwnerID = a_project.OwnerID;
    project.ProjectName = a_project.ProjectName;
    project.OwnerEmail = a_project.OwnerEmail;
    project.PricePerPart = a_project.PricePerPart;
    project.SheetQuantity = a_project.SheetQuantity;
    project.EdgeLength = a_project.EdgeLength;
    project.Price = a_project.Price;
    project.ShipToStoreID = a_project.ShipToStoreID;
    project.MachiningTime = a_project.MachiningTime;

    int nChangedItems = ObjectContext.SaveChanges();
}
It did absolutely nothing. It didn't save the project.
What happens if you add a SaveProject method on the server side and send the object back to the server for saving?
I've not done EF with RIA Services, but I've always sent my objects back to the server for saving. I'm assuming that the SubmitChanges call you are making wires everything up properly for sending it back to the server, but perhaps it is doing something wrong, and handling it manually will fix it.
I don't have the source at the moment, but I have seen it recommended that you use a new context for each operation in Silverlight. I ran into a similar problem today, and it was because I was using a service-level context that was remembering previous values that I didn't want. I changed to creating a new context for each service call and the behavior became what I expected.
public void SaveResponses(ICollection<Responses> items, Action<SubmitOperation> callback)
{
    try
    {
        SurveysDomainContext _context = new SurveysDomainContext();
        foreach (Responses item in items)
        {
            _context.Responses.Add(item);
        }
        _context.SubmitChanges(callback, null);
    }
    catch (Exception)
    {
        throw;
    }
}
As for the notion that one can't use a singleton global DomainContext, this is actually debatable. In my project I use a singleton DomainContext with no issues. In other projects, we have created a new DomainContext for different modules in the app where the entities are reused. There are definitely pros and cons. See:
Strategies for Handling Your DomainContext (external blog)
It seems that the problem is that when you attach your Project to the DomainContext, it checks the _context.Projects entity set, doesn't find an entity with that primary key, and then assumes that the newly attached entity doesn't exist server-side yet, so submitting changes should insert it. A possible workaround might be to explicitly load the newly created Project into the DomainContext. That would ensure the correct state is set on the entity: that the project already exists on the server and that it's an update instance rather than an insert instance.
So maybe something like:
// After your Project has already been created server-side with the invoke
_context.Load(_context.SomeQueryThatLoadsYourNewlyCreatedProject(), LoadBehavior.RefreshCurrent, (LoadOperation lo) =>
{
    Project project = lo.Entities.FirstOrDefault(); // is attached and has correct state
    if (project != null)
    {
        project.AuthenticationType = "strong";
        project.OwnerID = customer.ID;
        _projectItems.Do(pi => project.ProjectItems.Add(pi));
        project.Status = "calculationrequired";
        _context.SubmitChanges(); // hopefully will trigger an update, rather than an insert
    }
});
