Grails 3 Runtime Automated Database Custom SQL

We have a large application with about 100 tables. We have been updating the database manually between releases.
We also recently switched from Grails 2 to Grails 3.
We now need the application to create a new database from scratch, for the first time in a while, and for the first time since moving to Grails 3.
The database is varied enough that it needs some manual runtime customization.
To complicate matters, the application is also using Quartz.
The problem is:
After the Application initializes the tables, but before the Application and Quartz start running, we need to inject this set of custom SQL. Where is the correct place to insert the SQL?
I have tried places like Application.groovy and BootStrap.groovy, but have not been able to determine (with Grails 3) the appropriate location to inject this custom SQL. For example, the Quartz tasks start running and attempt to access certain tables that have not been "corrected" yet, so the app throws errors.
UPDATE
I tried the following in BootStrap.groovy:
Tag.withTransaction {
    String updateSQL = "ALTER TABLE tag DROP COLUMN class;"
    def sql = new groovy.sql.Sql(dataSource)
    sql.executeUpdate(updateSQL)
}

Tag.withTransaction {
    Tag newTag = new Tag(name: 'TAG 1').save(flush: true)
}

Tag.withTransaction {
    List tags = Tag.findAll()
    println("=== Tag Size = ${tags.size()}")
}
The executeUpdate() throws an exception stating that the column "class" cannot be found.
However, if I reorder the three sections and use the following:
Tag.withTransaction {
    Tag newTag = new Tag(name: 'TAG 1').save(flush: true)
}

Tag.withTransaction {
    List tags = Tag.findAll()
    println("=== Tag Size = ${tags.size()}")
}

Tag.withTransaction {
    String updateSQL = "ALTER TABLE tag DROP COLUMN class;"
    def sql = new groovy.sql.Sql(dataSource)
    sql.executeUpdate(updateSQL)
}
then the executeUpdate() completes successfully (although it is still too late, as the Quartz jobs are already running).
I do not understand this at all.
Thank you for the suggestions. For now I will try the Database Migration Plugin, but I would still appreciate other suggestions.
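For reference, a rough sketch of that approach in Groovy (the changeSet author and id are made up; updateOnStart is the plugin option that applies pending changes during startup, before BootStrap and the Quartz scheduler run, with dbCreate removed so the plugin owns the schema):

// grails-app/migrations/changelog.groovy
databaseChangeLog = {
    changeSet(author: "dev", id: "drop-tag-class-column") {
        // custom SQL runs after the tables exist but before the app starts
        sql("ALTER TABLE tag DROP COLUMN class")
    }
}

// application.groovy
grails.plugin.databasemigration.updateOnStart = true
grails.plugin.databasemigration.updateOnStartFileName = 'changelog.groovy'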

Related

Dotmim.Sync is throwing exception when synchronizing existing SQLite with SQL Server databases

I get a Dotmim.Sync.SyncException when calling the agent.SynchronizeAsync(tables) function:
Exception: Seems you are trying another Setup tables that what is stored in your server scope database. Please make a migration or create a new scope
This is my code:
public static async Task SynchronizeAsync()
{
    var serverProvider = new SqlSyncProvider(serverConnectionString);

    // Second provider is using plain old Sql Server provider, relying on triggers and tracking tables to create the sync environment
    var clientProvider = new SqliteSyncProvider(Path.Combine(FileSystem.AppDataDirectory, "treesDB.db3"));

    // Tables involved in the sync process:
    var tables = new string[] { "Trees" };

    // Creating an agent that will handle all the process
    var agent = new SyncAgent(clientProvider, serverProvider);

    // Launch the sync process
    var s1 = await agent.SynchronizeAsync(tables);
    await agent.LocalOrchestrator.UpdateUntrackedRowsAsync();
    var s2 = await agent.SynchronizeAsync();
}
I'm the author of Dotmim.Sync.
Do not hesitate to file an issue on GitHub if you are still struggling.
Regarding your issue, I think you have made some tests with different sets of tables.
You need to stick with one set of tables, because DMS needs to create various objects for each of them (triggers, stored procedures, and so on).
If you want to test different setups, you need to define different scopes.
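As a rough sketch (exact API names vary between DMS versions; the scope name "v2scope" and the second table are made up for illustration):

var setup = new SyncSetup(new[] { "Trees", "Leaves" });
var agent = new SyncAgent(clientProvider, serverProvider);

// Synchronize the new table list under its own scope, so it does not
// clash with the setup already stored in the server's scope table
var result = await agent.SynchronizeAsync("v2scope", setup);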
Complete documentation is available at https://dotmimsync.readthedocs.io/

How to configure Ignite to work as a full distributed database?

I'm trying to manage a decentralized DB spread across a huge number of partial DB instances. Each instance holds a subset of the whole data, and all of them act as both nodes and clients, so a query for some data must be spread to every (group of) instance(s), and whichever one has the data returns it.
To avoid losing data if one instance goes down, I figured each instance must replicate its contents to some of the others. How can this scenario be configured with Ignite?
Suppose I have a table with the name and last access datetime of users in a distributed application, like ...
class UserLogOns
{
    string UserName;
    DateTime LastAccess;
}
Now, when the program starts, I prepare Ignite to work as a decentralized DB ...
static void Main(string[] args)
{
    TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
    // Override local port.
    commSpi.LocalPort = 44444;
    commSpi.LocalPortRange = 0;

    IgniteConfiguration cfg = new IgniteConfiguration();
    // Override default communication SPI.
    cfg.CommunicationSpi = commSpi;

    using (var ignite = Ignition.Start(cfg))
    {
        var cfgCache = new CacheConfiguration("mio");
        cfgCache.AtomicityMode = CacheAtomicityMode.Transactional;
        var cache = ignite.GetOrCreateCache<string, UserLogOns>(cfgCache);
        cache.Put(Environment.MachineName, new UserLogOns { UserName = Environment.MachineName, LastAccess = DateTime.UtcNow });
    }
}
And now ... I want to get the LastAccess of another machine ("computerB"), whenever it was last updated.
Is this correct? How can it be implemented?
It depends on the exact use-case that you want to implement. In general, Ignite provides out of the box everything that you mentioned here.
This is a good way to start with using SQL in Ignite: https://apacheignite-sql.readme.io/docs
Create the table with "template=partitioned" instead of "replicated", as shown in the example here: https://apacheignite-sql.readme.io/docs/getting-started#section-creating-tables. Configure the number of backups, select a field to be the affinity key (the field used to map specific entries to a cluster node), and then just run some queries.
Also check out the concept of baseline topology if you are going to use native persistence: https://apacheignite.readme.io/docs/baseline-topology.
In pure in-memory mode, rebalancing between nodes happens automatically on every server topology change (a node that can store data joining or leaving).
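Mapping that onto the question's UserLogOns cache, a minimal sketch might look like this (the backup count of 1 is an arbitrary choice, not a recommendation):

var cfgCache = new CacheConfiguration("mio")
{
    AtomicityMode = CacheAtomicityMode.Transactional,
    CacheMode = CacheMode.Partitioned, // each node stores a slice of the data
    Backups = 1                        // one extra copy, so a single node loss is survivable
};
var cache = ignite.GetOrCreateCache<string, UserLogOns>(cfgCache);

// Reads are location-transparent: this returns computerB's entry
// even if the primary copy lives on another node.
UserLogOns other = cache.Get("computerB");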

How would I configure Effort Testing Tool to mock Entity Framework's DbContext without the actual SQL Server Database up and running?

Our team's application development involves using the Effort testing tool to mock our Entity Framework DbContext. However, it seems that Effort needs to see the actual SQL Server database that the application uses in order to mock our DbContext, which seems to go against proper unit testing principles.
The reason being that in order to unit test our application code by mocking anything related to database connectivity (for example Entity Framework's DbContext), we should never need a database to be up and running.
How would I configure Effort to mock Entity Framework's DbContext without the actual SQL Server database up and running?
Update:
@gert-arnold We are using the Entity Framework Model First approach to implement the back-end model and database.
The following excerpt is from the test code:
connection = Effort.EntityConnectionFactory.CreateTransient("name=NorthwindModel");
jsAudtMppngPrvdr = new BlahBlahAuditMappingProvider();
fctry = new BlahBlahDataContext(jsAudtMppngPrvdr, connection, false);
qryCtxt = new BlahBlahDataContext(connection, false);
audtCtxt = new BlahBlahAuditContext(connection, false);
mockedReptryCtxt = new BlahBlahDataContext(connection, false);
_repository = fctry.CreateRepository<Account>(mockedReptryCtxt, null);
_repositoryAccountRoleMaps = fctry.CreateRepository<AccountRoleMap>(null, _repository);
The "name=NorthwindModel" pertains to our edmx file which contains information about our Database tables
and their corresponding relationships.
If I remove the "name=NorthwindModel" by making the connection like the following line of code, I get an error stating that it expects an argument:
connection = Effort.EntityConnectionFactory.CreateTransient(); // throws error
Could you please explain how the aforementioned code should be rewritten?
You only need that connection string because Effort needs to know where the EDMX file is.
The EDMX file contains all the information required for creating an in-memory store with a schema identical to the one in your database. You have to specify a connection string only because I thought it would be convenient if the user didn't have to mess with EDMX paths.
If you check the implementation of the CreateTransient method you will see that it merely uses the connection string to get the metadata part of it.
public static EntityConnection CreateTransient(string entityConnectionString, IDataLoader dataLoader)
{
    var metadata = GetEffortCompatibleMetadataWorkspace(ref entityConnectionString);
    var connection = DbConnectionFactory.CreateTransient(dataLoader);
    return CreateEntityConnection(metadata, connection);
}

private static MetadataWorkspace GetEffortCompatibleMetadataWorkspace(ref string entityConnectionString)
{
    entityConnectionString = GetFullEntityConnectionString(entityConnectionString);
    var connectionStringBuilder = new EntityConnectionStringBuilder(entityConnectionString);
    return MetadataWorkspaceStore.GetMetadataWorkspace(
        connectionStringBuilder.Metadata,
        metadata => MetadataWorkspaceHelper.Rewrite(
            metadata,
            EffortProviderConfiguration.ProviderInvariantName,
            EffortProviderManifestTokens.Version1));
}
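So the method cannot be called with no metadata at all; either keep the "name=NorthwindModel" form or pass the metadata portion explicitly. A sketch (the res:// paths are assumptions; adjust them to your actual assembly and model names):

// Assumption: the model is embedded in the assembly as NorthwindModel.csdl/.ssdl/.msl
connection = Effort.EntityConnectionFactory.CreateTransient(
    "metadata=res://*/NorthwindModel.csdl|res://*/NorthwindModel.ssdl|res://*/NorthwindModel.msl");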

Is there any way to trace/log the SQL using Dapper?

Is there a way to dump the generated SQL to the debug log or something? I'm using it in a WinForms solution, so the mini-profiler idea won't work for me.
I had the same issue and, after searching and finding nothing ready to use, implemented some code myself. There is a package on NuGet, MiniProfiler.Integrations, that I would like to share.
Update V2: it now supports other database servers; for MySQL it requires MiniProfiler.Integrations.MySql.
Below are the steps to make it work with SQL Server:
1. Instantiate the connection:
var factory = new SqlServerDbConnectionFactory(_connectionString);
using (var connection = ProfiledDbConnectionFactory.New(factory, CustomDbProfiler.Current))
{
    // your code
}
2. After all the work is done, write all commands to a file if you want:
File.WriteAllText("SqlScripts.txt", CustomDbProfiler.Current.ProfilerContext.BuildCommands());
Dapper does not currently have an instrumentation point here. This is perhaps due, as you note, to the fact that we (as the authors) use mini-profiler to handle this. However, if it helps, the core parts of mini-profiler are actually designed to be architecture neutral, and I know of other people using it with winforms, wpf, wcf, etc - which would give you access to the profiling / tracing connection wrapper.
In theory, it would be perfectly possible to add some blanket capture-point, but I'm concerned about two things:
(primarily) security: since dapper doesn't have a concept of a context, it would be really really easy for malign code to attach quietly to sniff all sql traffic that goes via dapper; I really don't like the sound of that (this isn't an issue with the "decorator" approach, as the caller owns the connection, hence the logging context)
(secondary) performance: but... in truth, it is hard to say that a simple delegate-check (which would presumably be null in most cases) would have much impact
Of course, the other thing you could do is: steal the connection wrapper code from mini-profiler, and replace the profiler-context stuff with just: Debug.WriteLine etc.
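As an even simpler stand-in for that wrapper idea (not the mini-profiler code itself, just the same principle collapsed into a pass-through helper; QueryLogged is a made-up name, and it only captures calls you route through it):

using System.Collections.Generic;
using System.Data;
using System.Diagnostics;
using Dapper;

public static class DapperDebugExtensions
{
    // Log the SQL to the debug output, then delegate to Dapper's regular Query<T>.
    public static IEnumerable<T> QueryLogged<T>(this IDbConnection connection, string sql, object param = null)
    {
        Debug.WriteLine($"[Dapper] {sql}");
        return connection.Query<T>(sql, param);
    }
}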
You could also consider using the SQL Server Profiler that ships with SQL Server Management Studio (Tools → SQL Server Profiler); no Dapper extensions are needed, and the same approach works with other RDBMSs that provide a profiler tool.
Then, start a new session.
You'll get something like this, for example (you see all parameters and the complete SQL string):
exec sp_executesql N'SELECT * FROM Updates WHERE CAST(Product_ID as VARCHAR(50)) = @appId AND (Blocked IS NULL OR Blocked = 0)
AND (Beta IS NULL OR Beta = 0 OR @includeBeta = 1) AND (LangCode IS NULL OR LangCode IN (SELECT * FROM STRING_SPLIT(@langCode, '','')))',N'@appId nvarchar(4000),@includeBeta bit,@langCode nvarchar(4000)',@appId=N'fea5b0a7-1da6-4394-b8c8-05e7cb979161',@includeBeta=0,@langCode=N'de'
Try Dapper.Logging.
You can get it from NuGet. The way it works is you pass your code that creates your actual database connection into a factory that creates wrapped connections. Whenever a wrapped connection is opened or closed or you run a query against it, it will be logged. You can configure the logging message templates and other settings like whether SQL parameters are saved. Elapsed time is also saved.
In my opinion, the only downside is that the documentation is sparse, but I think that's just because it's a new project (as of this writing). I had to dig through the repo for a bit to understand it and to get it configured to my liking, but now it's working great.
From the documentation:
The tool consists of simple decorators for the DbConnection and DbCommand which track the execution time and write messages to the ILogger<T>. The ILogger<T> can be handled by any logging framework (e.g. Serilog). The result is similar to the default EF Core logging behavior.

The lib declares a helper method for registering the IDbConnectionFactory in the IoC container. The connection factory is SQL Provider agnostic. That's why you have to specify the real factory method:

services.AddDbConnectionFactory(prv => new SqlConnection(conStr));

After registration, the IDbConnectionFactory can be injected into classes that need a SQL connection.

private readonly IDbConnectionFactory _connectionFactory;

public GetProductsHandler(IDbConnectionFactory connectionFactory)
{
    _connectionFactory = connectionFactory;
}

The IDbConnectionFactory.CreateConnection will return a decorated version that logs the activity.

using (DbConnection db = _connectionFactory.CreateConnection())
{
    //...
}
This is not exhaustive and is essentially a bit of a hack, but if you have your SQL and you want to initialize your parameters, it's useful for basic debugging. Set up this extension method, then call it anywhere as desired.
public static class DapperExtensions
{
    public static string ArgsAsSql(this DynamicParameters args)
    {
        if (args is null) throw new ArgumentNullException(nameof(args));
        var sb = new StringBuilder();
        foreach (var name in args.ParameterNames)
        {
            var pValue = args.Get<dynamic>(name);
            var type = pValue.GetType();
            if (type == typeof(DateTime))
                sb.AppendFormat("DECLARE @{0} DATETIME ='{1}'\n", name, pValue.ToString("yyyy-MM-dd HH:mm:ss.fff"));
            else if (type == typeof(bool))
                sb.AppendFormat("DECLARE @{0} BIT = {1}\n", name, (bool)pValue ? 1 : 0);
            else if (type == typeof(int))
                sb.AppendFormat("DECLARE @{0} INT = {1}\n", name, pValue);
            else if (type == typeof(List<int>))
                sb.AppendFormat("-- REPLACE @{0} IN SQL: ({1})\n", name, string.Join(",", (List<int>)pValue));
            else
                sb.AppendFormat("DECLARE @{0} NVARCHAR(MAX) = '{1}'\n", name, pValue.ToString());
        }
        return sb.ToString();
    }
}
You can then just use this in the immediate or watch windows to grab the SQL.
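For example, reusing the appId parameter from the profiler output above:

var sql = "SELECT * FROM Updates WHERE CAST(Product_ID as VARCHAR(50)) = @appId";
var args = new DynamicParameters();
args.Add("appId", "fea5b0a7-1da6-4394-b8c8-05e7cb979161");

// Prepend the declarations to get a paste-ready script for SSMS:
Debug.WriteLine(args.ArgsAsSql() + sql);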
Just to add an update here since I see this question still gets quite a few hits: these days I use either Glimpse (though it seems to be dead now) or Stackify Prefix, both of which have SQL command trace capabilities.
It's not exactly what I was looking for when I asked the original question, but they solve the same problem.

Issue with getting database via Sitecore API

We noticed a slight oddity in the Sitecore API code. The code is below for your reference. It tries to get a database by doing new Database(database), but randomly it was failing.
This code worked for a while with Database db = new Database(database); but started failing randomly yesterday. When we changed the code to Database db = Database.GetDatabase(database);, it started working again. What is the difference between the two approaches, and what does Sitecore recommend?
I've seen this happen more than once now - multiple times in production and a couple of times in my development environment.
public static void DeleteItem(string id, string database)
{
    //get the database
    Database db = new Database(database);
    //get the item
    Item item = db.GetItem(new ID(id));
    if (item != null)
    {
        using (new Sitecore.SecurityModel.SecurityDisabler())
        {
            //delete the item
            item.Delete();
        }
    }
}
A common way you will see people get a specific database is:
Sitecore.Data.Database master = Sitecore.Configuration.Factory.GetDatabase("master");
This is equivalent to Sitecore.Data.Database.GetDatabase("master").
When you call either of these methods it will first check the cache for the database. If not found it will build up the database with all of the configuration values within the config file via reflection. Once the database is created it will be placed in the cache for future use.
When you use the constructor on the database, it simply creates a rather empty database object. I am rather surprised to hear it was working at all when you used this method.
The proper approach to get a specific database would be to use:
Sitecore.Configuration.Factory.GetDatabase("master");
// or
Sitecore.Data.Database.GetDatabase("master");
If you are looking to get the database used with the current request (aka context database) you can use Sitecore.Context.Database. You can also use Sitecore.Context.ContentDatabase.
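Applied to the DeleteItem method from the question, the recommended approach might look like this sketch:

public static void DeleteItem(string id, string database)
{
    // Factory.GetDatabase returns the fully configured, cached instance
    Database db = Sitecore.Configuration.Factory.GetDatabase(database);
    Item item = db.GetItem(new ID(id));
    if (item != null)
    {
        using (new Sitecore.SecurityModel.SecurityDisabler())
        {
            item.Delete();
        }
    }
}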
