Creating a cross-computer mutex using SQL Server

I have a few computers using the same database (SQL Server 2008).
I'm trying to synchronize a task between all these computers using the database.
Each task is represented by a GUID that is the lock id (comparing to a Mutex, that would be the mutex name).
I have a few thoughts, but I think they are kind of hacks, and I was hoping someone here would have a better solution:

1. Create a new table "Locks" where each row consists of a GUID; lock the table row exclusively in a transaction, and commit/roll back the transaction when finished.
2. Use sp_getapplock in a transaction where the lock name is the lock-id GUID.

I think that holding a transaction open is not so good... Maybe there's a solution that does not require me to hold an open transaction or session?

I have put together a little class; I will test it and report back with feedback:
public class GlobalMutex
{
    private SqlCommand _sqlCommand;
    private SqlConnection _sqlConnection;

    // sp_getapplock takes an application lock named after the resource,
    // owned by the current transaction.
    string sqlCommandText = @"
        declare @result int
        exec @result = sp_getapplock @Resource = @ResourceName, @LockMode = 'Exclusive', @LockOwner = 'Transaction', @LockTimeout = @LockTimeout
    ";

    public GlobalMutex(SqlConnection sqlConnection, string uniqueName, TimeSpan lockTimeout)
    {
        _sqlConnection = sqlConnection;
        _sqlCommand = new SqlCommand(sqlCommandText, sqlConnection);
        _sqlCommand.Parameters.AddWithValue("@ResourceName", uniqueName);
        _sqlCommand.Parameters.AddWithValue("@LockTimeout", (int)lockTimeout.TotalMilliseconds);
    }

    private readonly object _lockObject = new object();
    private Locker _lock = null;

    public Locker Lock
    {
        get
        {
            lock (_lockObject)
            {
                if (_lock != null)
                {
                    throw new InvalidOperationException("Unable to call Lock twice"); // don't know why
                }
                _lock = new Locker(_sqlConnection, _sqlCommand);
            }
            return _lock;
        }
    }

    public class Locker : IDisposable
    {
        private SqlTransaction _sqlTransaction;
        private SqlCommand _sqlCommand;

        internal Locker(SqlConnection sqlConnection, SqlCommand sqlCommand)
        {
            _sqlCommand = sqlCommand;
            // The lock is owned by this transaction and is released when it ends.
            _sqlTransaction = sqlConnection.BeginTransaction();
            _sqlCommand.Transaction = _sqlTransaction;
            // Note: ExecuteNonQuery does not surface @result; a timed-out lock
            // (sp_getapplock returns a negative value) would go unnoticed here.
            int result = sqlCommand.ExecuteNonQuery();
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposing)
            {
                _sqlTransaction.Commit(); // maybe _sqlTransaction.Rollback() might be slower
            }
        }
    }
}
Usage is:
GlobalMutex globalMutex = new GlobalMutex(
    new SqlConnection(""),
    "myGlobalUniqueLockName",
    new TimeSpan(0, 1, 0)
);
using (globalMutex.Lock)
{
    // do work.
}

I would recommend something rather different: use a queue. Rather than explicitly locking the task, add the task to a processing queue and let the queue handler dequeue the task and perform the work. The additional decoupling will also help scalability and throughput.
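For SQL Server, the queue itself can be a table consumed with a destructive read. A minimal sketch (the dbo.Tasks table with a Payload column is a hypothetical example, not something from the question):

using System.Data.SqlClient;

public static class TaskQueue
{
    // Claims one pending task, or returns null when the queue is empty.
    // READPAST makes competing workers skip rows that are already locked
    // instead of blocking behind them.
    public static string TryDequeue(string connectionString)
    {
        const string dequeueSql = @"
            DELETE TOP (1) FROM dbo.Tasks WITH (ROWLOCK, READPAST)
            OUTPUT deleted.Payload;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(dequeueSql, connection))
        {
            connection.Open();
            return (string)command.ExecuteScalar(); // null when no task was available
        }
    }
}

In real use you would run the dequeue inside a transaction so a crashed worker does not lose its task, which lines up with the next answer's point about transactional dequeuing.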

If the only shared resource you have is the database, then using a transaction lock as part of the solution may be your best option. And if I'm understanding the article linked by @Remus Rusanu in the other answer, it also requires the dequeuing to be in a transaction.
It depends somewhat on how long you plan to hold open these locks. If you are...
Forcing serialization for a brief operation on the lock ID in question
Already opening a transaction anyway to complete that operation
...then your option 2 is probably easiest and most reliable. I've had a solution like it in a production system for several years with no issues. It becomes even easier if you bundle the creation of the mutex with the creation of the transaction and wrap it all in a "using" block.
using (var transaction = MyTransactionUtil.CreateTransaction(mutexName))
{
    // do stuff
    transaction.Commit();
}
In your CreateTransaction utility method you call sp_getapplock right after creating the transaction. Then the whole thing (including the mutex) commits or rolls back together.
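A minimal sketch of what that utility could look like (the MyTransactionUtil name comes from the snippet above; the connection handling, timeout value and error handling are illustrative assumptions):

using System;
using System.Data;
using System.Data.SqlClient;

public static class MyTransactionUtil
{
    private const string ConnectionString = "..."; // your connection string

    public static SqlTransaction CreateTransaction(string mutexName)
    {
        // Note: a fuller version would also track and dispose the connection
        // when the transaction ends; omitted here for brevity.
        var connection = new SqlConnection(ConnectionString);
        connection.Open();
        var transaction = connection.BeginTransaction();

        using (var command = connection.CreateCommand())
        {
            command.Transaction = transaction;
            command.CommandType = CommandType.StoredProcedure;
            command.CommandText = "sp_getapplock";
            command.Parameters.AddWithValue("@Resource", mutexName);
            command.Parameters.AddWithValue("@LockMode", "Exclusive");
            command.Parameters.AddWithValue("@LockOwner", "Transaction");
            command.Parameters.AddWithValue("@LockTimeout", 60000); // one minute, as an example

            var returnValue = command.Parameters.Add("@ReturnValue", SqlDbType.Int);
            returnValue.Direction = ParameterDirection.ReturnValue;

            command.ExecuteNonQuery();

            // sp_getapplock returns a negative value when the lock was not granted.
            if ((int)returnValue.Value < 0)
            {
                transaction.Rollback();
                connection.Dispose();
                throw new TimeoutException("Could not acquire application lock: " + mutexName);
            }
        }

        return transaction;
    }
}

This way the lock acquisition either succeeds and travels with the transaction, or the caller gets an exception before any work runs.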

Related

MSSQL replication triggers, how to handle conditional HasTrigger in EntityFrameworkCore

I am using EntityFrameworkCore version 7 to implement data access across a number of client databases.
I have recently run into the error 'Could not save changes because the target table has database triggers.' on one of the clients. The error is self-explanatory and I understand how to fix it using HasTrigger.
The problem is that this error occurred because this specific client is replicated and has what I assume are auto-generated triggers MSmerge_upd, MSmerge_ins, MSmerge_del. Conversely, the majority of my clients are not replicated and therefore do not have any of these triggers in their databases.
So, what is the correct way to handle replication triggers in EntityFrameworkCore, particularly when your clients are a mishmash where some are replicated and some are not? Is there a way to check inside IEntityTypeConfiguration whether you are running on a replicated database and conditionally add the replication triggers? Is there some sort of best practice for handling this scenario with the new HasTriggers requirement?
Given that nobody has posted an answer, I will post my workaround for now.
I have created a class called AutoTriggerBuilderEntityTypeConfiguration which attempts to configure all the triggers for a given EF model.
There are some performance implications with this approach and it could potentially be improved by caching the triggers for all tables across the database (a sketch of that idea follows the code below), but it's sufficient for my use case.
It looks like this:
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public abstract class AutoTriggerBuilderEntityTypeConfiguration<TEntity> : IEntityTypeConfiguration<TEntity>
    where TEntity : class
{
    private readonly string _connectionString;

    public AutoTriggerBuilderEntityTypeConfiguration(string connectionString)
    {
        this._connectionString = connectionString;
    }

    public void Configure(EntityTypeBuilder<TEntity> builder)
    {
        this.ConfigureEntity(builder);

        var tableName = builder.Metadata.GetTableName();
        var tableTriggers = this.GetTriggersForTable(tableName);
        var declaredTriggers = builder.Metadata.GetDeclaredTriggers();

        // Register every trigger found in the database that was not already declared on the model.
        builder.ToTable(t =>
        {
            foreach (var trigger in tableTriggers)
            {
                if (!declaredTriggers.Any(o => o.ModelName.Equals(trigger, StringComparison.InvariantCultureIgnoreCase)))
                    t.HasTrigger(trigger);
            }
        });
    }

    private IEnumerable<string> GetTriggersForTable(string tableName)
    {
        var result = new List<string>();

        using (var connection = new SqlConnection(this._connectionString))
        using (var command = new SqlCommand(@"SELECT sysobjects.name AS Name FROM sysobjects WHERE sysobjects.type = 'TR' AND OBJECT_NAME(parent_obj) = @TableName", connection)
        {
            CommandType = CommandType.Text
        })
        {
            connection.Open();
            command.Parameters.AddWithValue("@TableName", tableName);

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    result.Add(reader.GetString(0)); // GetString takes a column ordinal, not a name
            }
        }

        return result;
    }

    public abstract void ConfigureEntity(EntityTypeBuilder<TEntity> builder);
}
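The caching mentioned above could be sketched like this: one query against sys.triggers loads every table's triggers, and subsequent lookups hit a static dictionary (the TriggerCache class and its query are my illustration, not part of the original workaround):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class TriggerCache
{
    // One trigger map per connection string, loaded on first use.
    private static readonly ConcurrentDictionary<string, Dictionary<string, List<string>>> _cache =
        new ConcurrentDictionary<string, Dictionary<string, List<string>>>();

    public static IReadOnlyList<string> GetTriggersForTable(string connectionString, string tableName)
    {
        var byTable = _cache.GetOrAdd(connectionString, LoadAllTriggers);
        if (byTable.TryGetValue(tableName, out var triggers))
            return triggers;
        return Array.Empty<string>();
    }

    private static Dictionary<string, List<string>> LoadAllTriggers(string connectionString)
    {
        var result = new Dictionary<string, List<string>>(StringComparer.OrdinalIgnoreCase);
        const string sql = @"
            SELECT t.name, OBJECT_NAME(t.parent_id)
            FROM sys.triggers t
            WHERE t.parent_class = 1"; // 1 = triggers whose parent is a table

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    var triggerName = reader.GetString(0);
                    var parentTable = reader.GetString(1);
                    if (!result.TryGetValue(parentTable, out var list))
                        result[parentTable] = list = new List<string>();
                    list.Add(triggerName);
                }
            }
        }
        return result;
    }
}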

Spring @Transactional does not begin new transaction on MS SQL

I'm having trouble with transactions in Spring Boot using the @Transactional annotation. The latest Spring is connected to an MS SQL database.
I have the following service, which periodically executes a transactional method according to some criteria:
@Service
public class SomeService {

    SomeRepository repository;

    public SomeService(SomeRepository someRepository) {
        this.repository = someRepository;
    }

    @Scheduled(fixedDelayString = "${property}") // 10 seconds
    protected void scheduledIteration() {
        if (something) {
            insertDataInNewTransaction(getSomeData());
        }
    }

    @Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
    protected void insertDataInNewTransaction(List<Data> data) {
        // insert data to db
        repository.saveAll(data);
        // call verify proc
        repository.verifyData();
    }
}
The algorithm is supposed to process the data, insert it into a table and perform a check (a DB procedure). If the procedure throws an exception, the transaction should be rolled back. I'm sure the procedure does not commit the transaction itself.
The problem I'm facing is that calling the method does not begin a new transaction (or does, but it's auto-committed), because I've tried the following:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
protected void insertDataInNewTransaction(List<Data> data) throws Exception {
    int counter = 0;
    for (Data d : data) {
        repository.save(d);
        counter++;
        // test
        if (counter == 10) {
            throw new Exception("test");
        }
    }
}
After this test method is executed, the first 10 rows remain in the table, where they were supposed to be rolled back. During debugging I've noticed that calling repository.save() in the loop inserts into the table outside a transaction, because I can see the row from the DB IDE while the debugger is sitting on the next line. This gave me the idea that the problem is caused by auto-commit, as that is the MS SQL default. So I tried adding the following properties, but without any difference:
spring.datasource.hikari.auto-commit=false
spring.datasource.auto-commit=false
Is there anything I'm doing wrong?
If you use Spring proxy AOP, then you need to make the method insertDataInNewTransaction public.
Remember that even if the method is public, if it is invoked from the same bean it will not create a new transaction (because the Spring proxy won't be called).
Short answer:
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public void insertDataInNewTransaction(List<Data> data) {
    // insert data to db
    repository.saveAll(data);
    // call verify proc
    repository.verifyData();
}
But if you really need a new separate transaction use Propagation.REQUIRES_NEW instead of Propagation.REQUIRED.

Flink - Why should I create my own RichSinkFunction instead of just opening and closing my PostgreSQL connection?

I would like to know why I really need to create my own RichSinkFunction or use JDBCOutputFormat to connect to the database, instead of just creating my connection, performing the query and closing the connection using the traditional PostgreSQL driver inside my SinkFunction.
I found many articles telling me to do that, but they do not explain why. What is the difference?
Code example using JDBCOutputFormat:
JDBCOutputFormat jdbcOutput = JDBCOutputFormat.buildJDBCOutputFormat()
    .setDrivername("org.postgresql.Driver")
    .setDBUrl("jdbc:postgresql://localhost:1234/test?user=xxx&password=xxx")
    .setQuery(query)
    .setSqlTypes(new int[] { Types.VARCHAR, Types.VARCHAR, Types.VARCHAR }) // set the types
    .finish();
Code example implementing my own RichSinkFunction:
public class RichCaseSink extends RichSinkFunction<Case> {

    private static final String UPSERT_CASE = "INSERT INTO public.cases (caseid, tracehash) "
        + "VALUES (?, ?) "
        + "ON CONFLICT (caseid) DO UPDATE SET "
        + " tracehash=?";

    private PreparedStatement statement;

    @Override
    public void invoke(Case aCase) throws Exception {
        statement.setString(1, aCase.getId());
        statement.setString(2, aCase.getTraceHash());
        statement.setString(3, aCase.getTraceHash());
        statement.addBatch();
        statement.executeBatch();
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection connection =
            DriverManager.getConnection("jdbc:postgresql://localhost:5432/casedb?user=signavio&password=signavio");
        statement = connection.prepareStatement(UPSERT_CASE);
    }
}
Why can't I just use the PostgreSQL driver directly?
public class Storable implements SinkFunction<Activity> {

    @Override
    public void invoke(Activity activity) throws Exception {
        Class.forName("org.postgresql.Driver");
        try (Connection connection =
                DriverManager.getConnection("jdbc:postgresql://localhost:5432/casedb?user=signavio&password=signavio")) {
            PreparedStatement statement = connection.prepareStatement(UPSERT_CASE);
            // Perform the query
            // close connection...
        }
    }
}
Does someone know the technical answer for the best practice in Flink?
Does implementing RichSinkFunction or using JDBCOutputFormat do something special?
Thank you in advance.
Well, you can use your own SinkFunction that simply opens the connection and writes data in invoke(), and it will work in general. But its performance will be very, very poor in most cases.
The actual difference between the first example and the second is that in the RichSinkFunction you use the open() method to open the connection and prepare the statement. open() is invoked only once, when the function is initialized. In the second example you open the connection and prepare the statement inside the invoke() method, which is invoked for every element of the input DataStream, so you actually open a new connection for every element in the stream.
Creating a database connection is an expensive thing to do, and doing it per element will certainly have terrible performance drawbacks.

At what point is a transaction committed?

I've read about entity lifecycles and locking strategies, and I watched some videos about this, but I'm still not sure I understand. I also understand there is a locking mechanism in the underlying RDBMS (I'm using MySQL).
I would like to know at what point a transaction is committed / an entity is detached, and how that affects other transactions from a locking point of view. At what point does a user have to wait until a transaction finishes? I've made two different scenarios below. For the sake of understanding, assume the table in the scenarios contains a lot of rows and the for loops take 10 minutes to complete.
Scenario 1:
@Stateless
public class AService implements AServiceInterface {

    @PersistenceContext(unitName = "my-pu")
    private EntityManager em;

    @Override
    public List<Aclass> getAll() {
        Query query = em.createQuery(SELECT_ALL_ROWS);
        return query.getResultList();
    }

    public void update(Aclass a) {
        em.merge(a);
    }
}
and a calling class:
public class aRandomClass {

    @EJB
    AServiceInterface service;

    public void method() {
        List<Aclass> listAclass = service.getAll();
        for (Aclass a : listAclass) {
            a.setProperty(methodThatTakesTime());
            service.update(a);
        }
    }
}
Without specifying a locking strategy: if another user wants to make an update to one row in the table and the for loop has already begun but is not finished, does he have to wait until the for loop is completed?
Scenario 2:
@Stateless
public class AService implements AServiceInterface {

    @PersistenceContext(unitName = "my-pu")
    private EntityManager em;

    @Override
    public List<Aclass> getAllAndUpdate() {
        Query query = em.createQuery(SELECT_ALL_ROWS);
        List<Aclass> listAclass = query.getResultList();
        for (Aclass a : listAclass) {
            a.setProperty(methodThatTakesTime());
            em.merge(a);
        }
        return listAclass;
    }
}
Same question.
It is important what kind of class your aRandomClass is. If it is also an EJB, you should take a look at transaction propagation. If it is a servlet, then the transaction is closed automatically right after your EJB method exits (no matter which one); that is done using dynamic proxies. So in scenario 1 the EJB container will open and close multiple transactions: one for service.getAll() and one for each service.update(a) call. In scenario 2, if the method getAllAndUpdate() is called only once, a single transaction will be opened and it will be closed on method exit.

AccessViolation Exception in wpf while connecting to db due to multiple threads

I am working on a multithreaded WPF application and I get an "AccessViolationException" saying "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." in my ConnectionOpen().
My code is as follows:
public class DatabaseServices
{
    static SQLiteConnection connection;
    static object conLock = new object();
    static object conCloseLock = new object();

    public static SQLiteDataReader ConnectionOpen(string Query)
    {
        lock (conLock)
        {
            if (connection != null && connection.State != System.Data.ConnectionState.Open)
            {
                connection = new SQLiteConnection("Data Source=Database/abc.sqlite");
                connection.Open();
            }
            else if (connection == null)
            {
                connection = new SQLiteConnection("Data Source=Database/abc.sqlite");
                connection.Open();
            }
            SQLiteCommand mycommand = new SQLiteCommand(Query, connection);
            SQLiteDataReader sqlite_datareader = mycommand.ExecuteReader();
            return sqlite_datareader;
        }
    }

    public static void ConnectionClose()
    {
        lock (conCloseLock)
        {
            connection.Close();
        }
    }
}
I've used lock to make the code thread safe as well, but it's not working. Why?
The SQLiteConnection is not thread safe. Like all other database connections, you are supposed to open one per thread. The fact that your code has a few parts that won't work even if it were thread safe does not help either. For example, anybody can close the connection while somebody else is just querying on it.
Keep to the well-established patterns. Do not use database connections across threads. Do not write your own connection caching. Open a connection, do your work and then close it. If you definitely need connection caching, look into the documentation of your database and find out how the built-in mechanism works.
SQLite does not support Multiple Active Result Sets (MARS), so you cannot have multiple DataReaders served by the same connection.
After connecting (and dropping the lock) you hand out a DataReader. I assume the client code calls this ConnectionOpen method twice, resulting in (or rather attempting) re-use of the same connection.
Create a connection per DataReader.
When you use connection pooling:
Data Source=c:\mydb.db;Version=3;Pooling=True;Max Pool Size=100;
connections will be recycled/pooled when closed properly. This lessens the overhead of creating and opening connections.
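Putting the two answers together, a sketch of the suggested shape: one pooled connection per call, with everything disposed before the method returns (the callback-based query handling is illustrative, not from the question):

using System;
using System.Data.SQLite;

public static class DatabaseServices
{
    // Pooling=True lets closed connections be recycled, so opening one per call stays cheap.
    private const string ConnectionString =
        "Data Source=Database/abc.sqlite;Version=3;Pooling=True;Max Pool Size=100;";

    public static void ExecuteReader(string query, Action<SQLiteDataReader> handleRow)
    {
        // Each call gets its own connection, command and reader, so nothing is
        // shared across threads and each DataReader has its own connection.
        using (var connection = new SQLiteConnection(ConnectionString))
        using (var command = new SQLiteCommand(query, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    handleRow(reader);
            }
        }
    }
}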
