How to modify the DACPAC model at build time? - sql-server

I want to alter the table model during build time in my BuildContributor. Here is some sample code:
using Microsoft.SqlServer.Dac.Deployment;
using Microsoft.SqlServer.Dac.Extensibility;
using Microsoft.SqlServer.Dac.Model;
using System.Collections.Generic;
using System.Linq;

namespace MyNamespace
{
    [ExportBuildContributor("MyNamespace.MyBuildContributor", "1.0.0.0")]
    public class MyBuildContributor : BuildContributor
    {
        protected override void OnExecute(BuildContributorContext context, IList<ExtensibilityError> messages)
        {
            foreach (var table in context.Model.GetObjects(DacQueryScopes.UserDefined, ModelSchema.Table))
            {
                var tableName = table.Name.Parts.Last();
                var rowId = "alter table " + tableName + " add rowid uniqueidentifier";
                context.Model.AddObjects(rowId);
            }
        }
    }
}
The build succeeds with no errors but I don't see rowid in any of the tables when I go look in the model.xml file in bin\Debug\MyDb.dacpac.

You can't use Model.AddObjects in this context.
The documentation for TSqlModel.AddObjects (https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.dac.model.tsqlmodel.addobjects(v=sql.120).aspx#M:Microsoft.SqlServer.Dac.Model.TSqlModel.AddObjects(System.String)) says:
"Adds objects to the model based on the contents of a TSql Script string. The script should consist of valid TSql DDL statements. Objects added using this method cannot be updated or deleted at a later point as update/delete requires a script name to be specified when adding the objects. If this is a requirement use the AddOrUpdateObjects method instead."
I.e., it can only add whole objects such as tables or stored procedures; columns by themselves aren't added to the model.
If you want to update an existing object (i.e. to add a column to an existing table) you will need to use TSqlModel.AddOrUpdateObjects, which also takes a script name. You can get the script name from a build contributor by using:
var sourceName = table.GetSourceInformation().SourceName;
Then you can build the updated script you want (this is just a rough outline of rebuilding the SQL for Stack Overflow; I'm sure you can do better):
var sql = table.GetScript();
sql = sql.Trim().TrimEnd(')', ';') + ", rowid uniqueidentifier);";
var sourceName = table.GetSourceInformation().SourceName;
model.AddOrUpdateObjects(sql, sourceName, new TSqlObjectOptions());
There are a few ways you could create the new script, but basically what you need is a script containing the original table definition plus your extra column, which you can pass to AddOrUpdateObjects to overwrite the original create table statement.
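To make that concrete, here is a rough sketch of the whole OnExecute reworked along those lines (an untested outline; note that GetSourceInformation() can return null for objects that have no source, so guard accordingly):

protected override void OnExecute(BuildContributorContext context, IList<ExtensibilityError> messages)
{
    // Materialize the tables first so the model isn't mutated while being enumerated.
    var tables = context.Model.GetObjects(DacQueryScopes.UserDefined, ModelSchema.Table).ToList();
    foreach (var table in tables)
    {
        // Rebuild the CREATE TABLE script with the extra column appended.
        var sql = table.GetScript();
        sql = sql.Trim().TrimEnd(')', ';') + ", rowid uniqueidentifier);";

        // AddOrUpdateObjects needs the source script name to know which object to replace.
        var sourceName = table.GetSourceInformation().SourceName;
        context.Model.AddOrUpdateObjects(sql, sourceName, new TSqlObjectOptions());
    }
}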
If you can't get a source name to use with AddOrUpdateObjects, then maybe you could use a post-deploy script to add the column to any table that needs it, and then use a deployment contributor to remove the resulting drop column step.
You could also look at using a deployment contributor instead to add the new column step to the deployment plan.
Hope it helps! Let me know how you get on :)

Related

SQL Server : get messages from referenced entities procedure in code

I'm running a big dependency scan on a legacy db and can see that some objects have obsolete references. If you run the code below in SSMS for a view that points to a non-existent table, as in my case, you get output on the Results tab AND error info on the Messages tab.
I tried checking every environment setting I know of, and the output of this stored procedure, but found no indication of the error.
How can I capture this event? I'm running this in a looped dynamic SQL script and capturing the output into a table for further processing.
Updated:
on error it's just text on the Messages tab; you still get output on the Results tab
This is an SP; it loops through an object list I took from sys.objects and runs this statement as my sample to get all dependencies, loading everything into a table. This call to dm_sql_referenced_entities is the only way to get inter-database dependencies at column level, so I need to stick with it 100%.
--
Select *
From sys.dm_sql_referenced_entities('dbo.v_View_Obs_Table','Object')
--
----update------
This behavior was fixed in SQL Server 2014 SP3 and SQL Server 2016 SP2:
Starting from Microsoft SQL Server 2012, errors raised by
sys.dm_sql_referenced_entities (such as when an object has undergone a
schema change) cannot be caught in a TRY...CATCH Transact-SQL block.
While this behavior is expected in SQL Server 2012 and above, this
improvement introduces a new column that's called is_incomplete to the
Dynamic Management View (DMV).
KB4038418 - Update adds a new column to DMV sys.dm_sql_referenced_entities in SQL Server 2014 and 2016
----update-------
The tl;dr is that you can't capture these on the server side; you must use a client program in C#, PowerShell or some other client that can process info messages.
That DMV is doing something strange that I don't fully understand. It's generating errors (which a normal UDF is not allowed to do), and those errors do not trigger a TRY/CATCH block or set @@ERROR. E.g.:
create table tempdb.dbo.foo(id int)
go
create view dbo.v_View_Obs_Table
as
select * from tempdb.dbo.foo
go
drop table tempdb.dbo.foo
go
begin try
Select * From sys.dm_sql_referenced_entities('dbo.v_View_Obs_Table','Object')
end try
begin catch
select ERROR_MESSAGE(); --<-- not hit
end catch
However these are real errors, as you can see running this from client code:
using System;
using System.Data.SqlClient;

namespace ConsoleApp6
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var con = new SqlConnection("Server=.;database=AdventureWorks;integrated security=true"))
            {
                con.Open();
                // Route errors through the InfoMessage event instead of throwing exceptions.
                con.FireInfoMessageEventOnUserErrors = true;
                con.InfoMessage += (s, a) =>
                {
                    Console.WriteLine($"{a.Message}");
                    foreach (SqlError e in a.Errors)
                    {
                        Console.WriteLine($"{e.Message} Number:{e.Number} Class:{e.Class} State:{e.State} at {e.Procedure}:{e.LineNumber}");
                    }
                };
                var cmd = con.CreateCommand();
                cmd.CommandText = "Select * From sys.dm_sql_referenced_entities('dbo.v_View_Obs_Table','Object')";
                using (var rdr = cmd.ExecuteReader())
                {
                    // Read across all resultsets; the errors arrive as info messages.
                    while (rdr.Read() || (rdr.NextResult() && rdr.Read()))
                    {
                        Console.WriteLine(rdr[0]);
                    }
                }
                Console.ReadKey();
            }
        }
    }
}
outputs
Invalid object name 'tempdb.dbo.foo'.
Invalid object name 'tempdb.dbo.foo'. Number:208 Class:16 State:3 at v_View_Obs_Table:4
0
The dependencies reported for entity "dbo.v_View_Obs_Table" might not include references to all columns. This is either because the entity references an object that does not exist or because of an error in one or more statements in the entity. Before rerunning the query, ensure that there are no errors in the entity and that all objects referenced by the entity exist.
The dependencies reported for entity "dbo.v_View_Obs_Table" might not include references to all columns. This is either because the entity references an object that does not exist or because of an error in one or more statements in the entity. Before rerunning the query, ensure that there are no errors in the entity and that all objects referenced by the entity exist. Number:2020 Class:16 State:1 at :1

Audit trail with Entity Framework Core

I have an ASP.NET Core 2.0 app using Entity Framework Core on a SQL Server db.
I have to trace and audit everything the users do to the data. My goal is to have an automatic mechanism that records everything that happens.
For example, if I have the table Animals, I want a parallel table "Audit_animals" where you can find all the info about the data, the operation type (add, delete, edit) and the user who made the change.
I already built this a while ago in Django + MySQL, but now the environment is different. I found this and it seems interesting, but I'd like to know whether there are better ways, and which approach is best in EF Core.
UPDATE
I'm trying this and something happens, but I have some problems.
I added this:
services.AddMvc().AddJsonOptions(options => {
    options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
});

public Mydb_Context(DbContextOptions<Mydb_Context> options) : base(options)
{
    Audit.EntityFramework.Configuration.Setup()
        .ForContext<Mydb_Context>(config => config
            .IncludeEntityObjects()
            .AuditEventType("Mydb_Context:Mydb"))
        .UseOptOut();
}

public MyRepository(Mydb_Context context)
{
    _context = context;
    _context.AddAuditCustomField("UserName", "pippo");
}
I also created a table to insert the audits (only one, to test this tool), but the only thing I got is what you see in the image: a list of JSON files with the data I created... why?
Read the documentation:
Event Output
To configure the output persistence mechanism please see Configuration and Data Providers sections.
Then, in the documentation on Configuration:
If you don't specify a Data Provider, a default FileDataProvider will be used to write the events as .json files into the current working directory. (emphasis mine)
Long and short, follow the documentation to configure the data provider you'd like to use.
If you are going to map the audit table (Audit_Animals) to the same EF context as the audited Animals table, you can use the EntityFramework Data Provider included in the same Audit.EntityFramework library.
Check the documentation here:
Entity Framework Data Provider
If you plan to store the audit logs in
the same database as the audited entities, you can use the
EntityFrameworkDataProvider. Use this if you plan to store the audit
trails for each entity type in a table with similar structure.
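As an illustration only (IAuditRow and its members are hypothetical names for your own audit entities; check the library's documentation for the exact fluent API), wiring up the provider could look something like this:

Audit.Core.Configuration.Setup()
    .UseEntityFramework(ef => ef
        // Map each audited entity to its audit table by name (e.g. Animal -> Audit_Animal).
        .AuditTypeNameMapper(typeName => "Audit_" + typeName)
        .AuditEntityAction<IAuditRow>((evt, entry, auditRow) =>
        {
            // IAuditRow is a hypothetical interface your audit entities would implement.
            auditRow.AuditDate = DateTime.UtcNow;
            auditRow.AuditAction = entry.Action; // "Insert", "Update" or "Delete"
            auditRow.AuditUser = evt.CustomFields["UserName"]?.ToString(); // set via AddAuditCustomField above
        }));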
There is another library that can audit EF contexts in a similar way, take a look: zzzprojects/EntityFramework-Plus.
Cannot recommend one over the other since they provide different features (and I'm the owner of the audit.net library).
Update:
.NET 6 and Entity Framework Core 6.0 support SQL Server temporal tables out of the box.
See this answer for examples:
https://stackoverflow.com/a/70017768/3850405
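With native support the mapping is plain model configuration; a minimal sketch using the Animals table from the question (EF Core 6+, SQL Server provider):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Marks the table as system-versioned; EF creates and maintains a history table for it.
    modelBuilder.Entity<Animal>()
        .ToTable("Animals", tb => tb.IsTemporal());
}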
Original:
You could have a look at temporal tables (system-versioned temporal tables) if you are using SQL Server 2016 or later, or Azure SQL.
https://learn.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables?view=sql-server-ver15
From documentation:
Database feature that brings built-in support for providing
information about data stored in the table at any point in time rather
than only the data that is correct at the current moment in time.
Temporal is a database feature that was introduced in ANSI SQL 2011.
There is currently an open issue to support this out of the box:
https://github.com/dotnet/efcore/issues/4693
There are third party options available today but since they are not from Microsoft it is of course a risk that they won't be supported in future versions.
https://github.com/Adam-Langley/efcore-temporal-query
https://github.com/findulov/EntityFrameworkCore.TemporalTables
I solved it like this:
If you use the LocalDB included with Visual Studio 2019 (Microsoft SQL Server 2016 (13.1.4001.0 LocalDB)) you will need to upgrade if you use cascading DELETE or UPDATE, because temporal tables with cascading actions are not supported in that version.
Complete guide for upgrading here:
https://stackoverflow.com/a/64210519/3850405
Start by adding a new empty migration. I prefer to use Package Manager Console (PMC):
Add-Migration "Temporal tables"
Should look like this:
public partial class Temporaltables : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
    }
}
Then edit the migration like this:
public partial class Temporaltables : Migration
{
    List<string> tablesToUpdate = new List<string>
    {
        "Images",
        "Languages",
        "Questions",
        "Texts",
        "Medias",
    };

    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql($"CREATE SCHEMA History");
        foreach (var table in tablesToUpdate)
        {
            string alterStatement = $@"ALTER TABLE [{table}] ADD SysStartTime datetime2(0) GENERATED ALWAYS AS ROW START HIDDEN
                CONSTRAINT DF_{table}_SysStart DEFAULT GETDATE(), SysEndTime datetime2(0) GENERATED ALWAYS AS ROW END HIDDEN
                CONSTRAINT DF_{table}_SysEnd DEFAULT CONVERT(datetime2 (0), '9999-12-31 23:59:59'),
                PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = History.[{table}]));";
            migrationBuilder.Sql(alterStatement);
        }
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        foreach (var table in tablesToUpdate)
        {
            string alterStatement = $@"ALTER TABLE [{table}] SET (SYSTEM_VERSIONING = OFF);";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] DROP PERIOD FOR SYSTEM_TIME";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] DROP DF_{table}_SysStart, DF_{table}_SysEnd";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] DROP COLUMN SysStartTime, COLUMN SysEndTime";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"DROP TABLE History.[{table}]";
            migrationBuilder.Sql(alterStatement);
        }
        migrationBuilder.Sql($"DROP SCHEMA History");
    }
}
tablesToUpdate should contain every table you need history for.
Then run Update-Database command.
Original source, modified a bit to escape table names with square brackets etc.:
https://intellitect.com/updating-sql-database-use-temporal-tables-entity-framework-migration/
Testing Create, Update and Delete will then show a complete history.
[HttpGet]
public async Task<ActionResult<string>> Test()
{
    var identifier1 = "OATestar123";
    var identifier2 = "OATestar12345";

    var newQuestion = new Question()
    {
        Identifier = identifier1
    };
    _dbContext.Questions.Add(newQuestion);
    await _dbContext.SaveChangesAsync();

    var question = await _dbContext.Questions.FirstOrDefaultAsync(x => x.Identifier == identifier1);
    question.Identifier = identifier2;
    await _dbContext.SaveChangesAsync();

    question = await _dbContext.Questions.FirstOrDefaultAsync(x => x.Identifier == identifier2);
    _dbContext.Entry(question).State = EntityState.Deleted;
    await _dbContext.SaveChangesAsync();

    return Ok();
}
Tested a few times but the log will look like this:
IMO this solution has a huge advantage: it is not Object Relational Mapper (ORM) specific, and you even get history for changes made in plain SQL.
The history tables are also read-only by default, so there is less chance of a corrupted audit trail. The error received is: Cannot update rows in a temporal history table ''
If you need access to the data you can fetch it with your preferred ORM or audit it via SQL.
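For example, reading a row's full history through the same context with raw SQL might look like this (a sketch; the period columns are HIDDEN, so the mapped entity shape is unchanged):

// Sketch: FOR SYSTEM_TIME ALL returns the current plus all historical row versions.
var history = await _dbContext.Questions
    .FromSqlRaw("SELECT * FROM [Questions] FOR SYSTEM_TIME ALL WHERE [Identifier] = {0}", identifier2)
    .AsNoTracking() // history rows share the entity's key, so skip identity resolution
    .ToListAsync();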

F# FSharp.Data.SqlClient not recognizing multiple return tables from Stored Procedure

I am not sure if this is possible but I have not been able to come across clear documentation for this use case. I am using F# 4 and the FSharp.Data.SqlClient library to connect to SQL Server 2016. I am wanting to call a stored procedure that returns multiple tables and turn those tables into the corresponding records. In this case the first table is made up of items and the second table is made up of customers.
My instinct is that it should look something like this:
let items, customers = cmd.Execute()
My gut is that items would be an IEnumerable<item> and customers would be an IEnumerable<customer> where item and customer are both Record types. What it appears is happening though is that FSharp.Data.SqlClient is only seeing the first returned table from the stored procedure. I am working on a SQL Server 2016 Developer instance. Here is the T-SQL to setup the example:
create table Item (
ItemID int identity(1, 1) primary key,
ItemName nvarchar(50)
)
go
create table Customer (
CustomerID int identity(1, 1) primary key,
CustomerName nvarchar(50)
)
go
insert into Item (ItemName) values ('A');
insert into Item (ItemName) values ('B');
insert into Item (ItemName) values ('C');
insert into Customer (CustomerName) values ('Gary');
insert into Customer (CustomerName) values ('Sergei');
insert into Customer (CustomerName) values ('Elise');
go
create procedure dbo.ExampleProcedure
as
begin
set nocount on;
select
ItemID,
ItemName
from Item
select
CustomerID,
CustomerName
from Customer
end;
And here is the F# script that I am testing with. It shows what I would like to be able to do but I get a compile error on the last line:
#r "../packages/FSharp.Data.SqlClient.1.8.2/lib/net40/FSharp.Data.SqlClient.dll"
#r "../packages/FSharp.Data.2.3.2/lib/net40/FSharp.Data.dll"
#r "System.Xml.Linq.dll"
open FSharp.Data
[<Literal>]
let connStr =
    "Data Source=**connection string**;"
type queryExample = SqlProgrammabilityProvider<connStr>
do
    use cmd = new queryExample.dbo.ExampleProcedure(connStr)
    let items, customers = cmd.Execute()
I am wanting items to correspond to the first returned table and customers to correspond to the second returned table. The intellisense suggests that FSharp.Data.SqlClient is only seeing the first table. When I hover over cmd.Execute() the popup says "This expression was expected to have type 'a*'b but here has type System.Collections.Generic.IEnumerable<SqlProgrammabilityProvider<...>.dbo.ExampleProcedure.Record>". If I do the following I get access to the Items query in the stored procedure:
// Learn more about F# at http://fsharp.org. See the 'F# Tutorial' project
// for more guidance on F# programming.
#r "../packages/FSharp.Data.SqlClient.1.8.2/lib/net40/FSharp.Data.SqlClient.dll"
#r "../packages/FSharp.Data.2.3.2/lib/net40/FSharp.Data.dll"
#r "System.Xml.Linq.dll"
open FSharp.Data
[<Literal>]
let connStr =
    "Data Source=**connection string**;"
type queryExample = SqlProgrammabilityProvider<connStr>
do
    use cmd = new queryExample.dbo.ExampleProcedure(connStr)
    for item in cmd.Execute() do
        printfn "%A" item.ItemID
Is this even possible? Is my approach wrong? I could not find clear documentation on this use case but I thought it would be common enough it would be covered.
Update
Just to clarify what I am trying to achieve I am showing how I solve this in C#. In C# I create a DataSet object and populate it with the results of the Stored Procedure. From there I pick out the individual tables to work with. After extracting the tables I then use LINQ to transform the rows into the corresponding objects. It often looks something like the following:
using System.Data;
using System.Data.SqlClient;
var connStr = "**connection string**";
var sqlConnection = new SqlConnection(connStr);
var sqlCommand = new SqlCommand("ExampleProcedure", sqlConnection);
sqlCommand.CommandType = CommandType.StoredProcedure;
var dataSet = new DataSet();
var adapter = new SqlDataAdapter(sqlCommand);
adapter.Fill(dataSet);
var itemsTable = dataSet.Tables[0];
// Turn the itemsTable into a List<Item> using LINQ here
var customersTable = dataSet.Tables[1];
// Turn the customersTable into List<Customer> using LINQ here
I find this to be overly verbose for such a simple thing as extracting the individual tables but perhaps I am too sensitive to code clutter. I know that F# must have a more elegant and terse way to express this.
I don't know F#, however this is a data access problem.
When a stored procedure returns multiple resultsets, you need to access them in sequence, one by one.
cmd.ExecuteReader() returns a data reader pointing to the first resultset. You process that resultset, perhaps filling a list with instances of a custom class, then call the NextResult method to get access to the next resultset, and so on.
A reference for the method "NextResult": https://msdn.microsoft.com/pt-br/library/system.data.sqlclient.sqldatareader.nextresult(v=vs.110).aspx
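In C# terms, a minimal sketch (connStr is assumed; the Item/Customer shapes come from the question's tables):

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

using (var con = new SqlConnection(connStr))
using (var cmd = new SqlCommand("dbo.ExampleProcedure", con) { CommandType = CommandType.StoredProcedure })
{
    con.Open();
    using (var rdr = cmd.ExecuteReader())
    {
        // First resultset: Item rows.
        var items = new List<(int Id, string Name)>();
        while (rdr.Read())
            items.Add((rdr.GetInt32(0), rdr.GetString(1)));

        // Advance to the second resultset: Customer rows.
        var customers = new List<(int Id, string Name)>();
        if (rdr.NextResult())
            while (rdr.Read())
                customers.Add((rdr.GetInt32(0), rdr.GetString(1)));
    }
}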

SQL Server 2016 SSIS get cursor from stored procedure

I am using SQL Server 2016.
I have a stored procedure GET_RECORDS that takes input parameters for filter and outputs a CURSOR parameter
I want to get this cursor in my SSIS package
I created a data flow task, an OLE DB source and variables for the parameter values, then mapped the parameters:
Params mapping screen
but when I tried to save the component I got an error:
error screen
I tried to add a WITH RESULT SETS clause with some dummy columns, but my procedure doesn't return any result set.
What am I doing wrong?
Any advices will be helpful.
Thank you.
With regards, Yuriy.
The source component is trying to determine what columns and types will be returned. Because you are using dynamic SQL the metadata can change each time you run it.
WITH RESULT SETS allows you to define the data being returned, but it should only be used if you are guaranteed to get those results every time you execute.
EDIT:
I create a connection and run the command so that it populates a data table. Then I put the column headers into a string array. There are plenty of examples out there.
Then I use the following function to create a destination table. Finally I create a data reader and pass that to the .NET SqlBulkCopy (a rough sketch of that last step follows the function below). Hope this helps.
private void CreateTable(string TableName, string[] Fields)
{
    if (TableExists(TableName) && Overwrite)
    {
        SqlCommand = new SqlCommand($"Drop Table [{TableName}]", SqlConnection);
        SqlCommand.ExecuteNonQuery();
    }

    string Sql = $"Create Table [{TableName}] (";
    int ColumnNumber = 1;
    foreach (string Field in Fields)
    {
        string FieldValue = Field;
        if (!HasHeaders)
        {
            FieldValue = "Column" + ColumnNumber;
            ColumnNumber++;
        }
        Sql += $"[{FieldValue}] Varchar(8000),";
    }
    Sql = Sql + "ImportFileID Int, ID Int Identity(1,1) Not Null, Constraint [PK_" + TableName + "] Primary Key Clustered ([ID] Asc))";
    SqlCommand = new SqlCommand(Sql, SqlConnection);
    SqlCommand.ExecuteNonQuery();
}
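The bulk-copy step mentioned above might then look roughly like this (a sketch; sourceReader stands in for the data reader over your source command, and SqlConnection/TableName come from the surrounding class):

// Sketch: stream the source rows into the freshly created destination table.
using (var bulkCopy = new SqlBulkCopy(SqlConnection))
{
    bulkCopy.DestinationTableName = $"[{TableName}]";
    bulkCopy.WriteToServer(sourceReader); // sourceReader: an IDataReader over the source query
}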
Use an ADO.NET source instead of an OLE DB source, define a simple select and get the columns you wish to return. Then you can define an expression in the data flow properties.
Search for "ado.net source dynamic sql".
:)
Try to return the records and use a foreach loop in the ETL instead of a cursor:
https://www.simple-talk.com/sql/ssis/implementing-foreach-looping-logic-in-ssis/
I think you can do it in a simpler way, but I don't know exactly what you are doing...

Correct method of deleting over 2100 rows (by ID) with Dapper

I am trying to use Dapper to support my data access in my server app.
My server app has another application that drops records into my database at a rate of 400 per minute.
My app pulls them out in batches, processes them, and then deletes them from the database.
Since data continues to flow into the database while I am processing, I don't have a good way to say delete from myTable where allProcessed = true.
However, I do know the PK value of the rows to delete. So I want to do a delete from myTable where Id in @listToDelete.
Problem is that if my server goes down for even 6 minutes, then I have over 2100 rows to delete.
Since Dapper takes my @listToDelete and turns each element into a parameter, my call to delete fails. (Causing my data purging to get even further behind.)
What is the best way to deal with this in Dapper?
NOTES:
I have looked at Table-Valued Parameters but from what I can see, they are not very performant. This piece of my architecture is the bottleneck of my system and I need it to be very, very fast.
One option is to create a temp table on the server and then use the bulk load facility to upload all the IDs into that table at once. Then use a join, EXISTS or IN clause to delete only the records that you uploaded into your temp table.
Bulk loads are a well-optimized path in SQL Server and it should be very fast.
For example:
Execute the statement CREATE TABLE #RowsToDelete(ID INT PRIMARY KEY)
Use a bulk load to insert keys into #RowsToDelete
Execute DELETE FROM myTable where Id IN (SELECT ID FROM #RowsToDelete)
Execute DROP TABLE #RowsToDelete (the table will also be automatically dropped if you close the session)
(Assuming Dapper) code example:
conn.Open();

var columnName = "ID";
conn.Execute(string.Format("CREATE TABLE #{0}s({0} INT PRIMARY KEY)", columnName));

using (var bulkCopy = new SqlBulkCopy(conn))
{
    bulkCopy.BatchSize = ids.Count;
    bulkCopy.DestinationTableName = string.Format("#{0}s", columnName);
    var table = new DataTable();
    table.Columns.Add(columnName, typeof(int));
    bulkCopy.ColumnMappings.Add(columnName, columnName);
    foreach (var id in ids)
    {
        table.Rows.Add(id);
    }
    bulkCopy.WriteToServer(table);
}

// or do other things with your table instead of deleting here
conn.Execute(string.Format(@"DELETE FROM myTable where Id IN
    (SELECT {0} FROM #{0}s)", columnName));
conn.Execute(string.Format("DROP TABLE #{0}s", columnName));
To get this code working, I went dark side.
Since Dapper turns my list into parameters, and SQL Server can't handle that many parameters (I had never before needed even double-digit parameter counts), I had to go with dynamic SQL.
So here was my solution:
string listOfIdsJoined = "("+String.Join(",", listOfIds.ToArray())+")";
connection.Execute("delete from myTable where Id in " + listOfIdsJoined);
Before everyone grabs their torches and pitchforks, let me explain.
This code runs on a server whose only input is a data feed from a Mainframe system.
The list I am dynamically creating is a list of longs/bigints.
The longs/bigints are from an Identity column.
I know constructing dynamic SQL is bad juju, but in this case, I just can't see how it leads to a security risk.
Dapper accepts a list of objects whose properties match the query's parameters, so in the case above a list of objects with an Id property will work; Dapper then executes the statement once per element, which keeps each execution under the parameter limit.
connection.Execute("delete from myTable where Id in (@Id)", listOfIds.AsEnumerable().Select(i=> new { Id = i }).ToList());
This will work.
