TooManyRowsAffectedException with encrypted triggers - sql-server

I'm using NHibernate to update 2 columns in a table that has 3 encrypted triggers on it. The triggers are not owned by me and I cannot make changes to them, so unfortunately I can't SET NOCOUNT ON inside them.
Is there another way to get around the TooManyRowsAffectedException that is thrown on commit?
Update 1
So far the only way I've gotten around the issue is to step around the .Save routine with:
var query = session.CreateSQLQuery("update Orders set Notes = :Notes, Status = :Status where OrderId = :Order");
query.SetString("Notes", orderHeader.Notes);
query.SetString("Status", orderHeader.OrderStatus);
query.SetInt32("Order", orderHeader.OrderHeaderId);
query.ExecuteUpdate();
It feels dirty and is not easily extended, but it doesn't crater.

We had the same problem with a 3rd party Sybase database. Fortunately, after some digging into the NHibernate code and brief discussion with the developers, it seems that there is a straightforward solution that doesn't require changes to NHibernate. The solution is given by Fabio Maulo in this thread in the NHibernate developer group.
To implement this for Sybase we created our own implementation of IBatcherFactory, inherited from NonBatchingBatcher and overrode the AddToBatch() method to remove the call to VerifyOutcomeNonBatched() on the provided IExpectation object:
public class NonVerifyingBatcherFactory : IBatcherFactory
{
    public virtual IBatcher CreateBatcher(ConnectionManager connectionManager, IInterceptor interceptor)
    {
        return new NonBatchingBatcherWithoutVerification(connectionManager, interceptor);
    }
}

public class NonBatchingBatcherWithoutVerification : NonBatchingBatcher
{
    public NonBatchingBatcherWithoutVerification(ConnectionManager connectionManager, IInterceptor interceptor) : base(connectionManager, interceptor)
    {}

    public override void AddToBatch(IExpectation expectation)
    {
        IDbCommand cmd = CurrentCommand;
        ExecuteNonQuery(cmd);
        // Removed the following line
        //expectation.VerifyOutcomeNonBatched(rowCount, cmd);
    }
}
To do the same for SQL Server you would need to inherit from SqlClientBatchingBatcher, override DoExecuteBatch() and remove the call to VerifyOutcomeBatched() from the Expectations object:
public class NonBatchingBatcherWithoutVerification : SqlClientBatchingBatcher
{
    public NonBatchingBatcherWithoutVerification(ConnectionManager connectionManager, IInterceptor interceptor) : base(connectionManager, interceptor)
    {}

    protected override void DoExecuteBatch(IDbCommand ps)
    {
        log.DebugFormat("Executing batch");
        CheckReaders();
        Prepare(currentBatch.BatchCommand);
        if (Factory.Settings.SqlStatementLogger.IsDebugEnabled)
        {
            Factory.Settings.SqlStatementLogger.LogBatchCommand(currentBatchCommandsLog.ToString());
            currentBatchCommandsLog = new StringBuilder().AppendLine("Batch commands:");
        }

        int rowsAffected = currentBatch.ExecuteNonQuery();
        // Removed the following line
        //Expectations.VerifyOutcomeBatched(totalExpectedRowsAffected, rowsAffected);

        currentBatch.Dispose();
        totalExpectedRowsAffected = 0;
        currentBatch = new SqlClientSqlCommandSet();
    }
}
Now you need to inject your new classes into NHibernate. There are at least two ways to do this that I am aware of:
Provide the name of your IBatcherFactory implementation in the adonet.factory_class configuration property (see the sketch after this list)
Create a custom driver that implements the IEmbeddedBatcherFactoryProvider interface
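If you go down the configuration-property route rather than the custom-driver route, a minimal sketch of wiring it up programmatically (assuming a plain NHibernate Configuration object rather than XML configuration, and using the property names mentioned in this answer) might look like this:
using NHibernate.Cfg;

// Sketch only: register the custom batcher factory through configuration
// properties instead of a custom driver.
var cfg = new Configuration();
cfg.Configure(); // reads hibernate.cfg.xml / app.config as usual
cfg.SetProperty("adonet.factory_class",
    typeof(NonVerifyingBatcherFactory).AssemblyQualifiedName);
cfg.SetProperty("adonet.batch_size", "1"); // see the note at the end of this answer
var sessionFactory = cfg.BuildSessionFactory();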
Given that we already had a custom driver in our project to work around Sybase 12 ANSI string problems, it was a straightforward change to implement the interface as follows:
public class DriverWithCustomBatcherFactory : SybaseAdoNet12ClientDriver, IEmbeddedBatcherFactoryProvider
{
    public Type BatcherFactoryClass
    {
        get { return typeof(NonVerifyingBatcherFactory); }
    }

    //...other driver code for our project...
}
The driver can be configured by providing the driver name in the connection.driver_class configuration property. We wanted to use Fluent NHibernate, where it can be done as follows:
public class SybaseConfiguration : PersistenceConfiguration<SybaseConfiguration, SybaseConnectionStringBuilder>
{
    SybaseConfiguration()
    {
        Driver<DriverWithCustomBatcherFactory>();
        AdoNetBatchSize(1); // This is required to use our new batcher
    }

    /// <summary>
    /// The dialect to use
    /// </summary>
    public static SybaseConfiguration SybaseDialect
    {
        get
        {
            return new SybaseConfiguration()
                .Dialect<SybaseAdoNet12Dialect>();
        }
    }
}
and when creating the session factory we use this new class as follows:
var sf = Fluently.Configure()
    .Database(SybaseConfiguration.SybaseDialect.ConnectionString(_connectionString))
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<MyEntity>())
    .BuildSessionFactory();
Finally you need to set the adonet.batch_size property to 1 to ensure that your new batcher class is used. In Fluent NHibernate this is done using the AdoNetBatchSize() method in a class that inherits from PersistenceConfiguration (see the SybaseConfiguration class constructor above for an example of this).

er... you might be able to decrypt them...
Edit: if you can't change code, decrypt, or disable then you have no code options on the SQL Server side.
However, you could try the "disallow results from triggers" option, which works on SQL 2005 and SQL 2008 but will be removed in later versions. I don't know if it suppresses rowcount messages though.

Setting the "Disallow Results from Triggers" option to 1 worked for us (the default is 0).
Note that this option will not be available in future releases of Microsoft SQL Server, but after it is no longer available it will behave as if it were set to 1. So setting it to 1 now fixes the problem and also gives you the same behavior as in future releases.
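If you prefer to flip the option from code rather than from SSMS, a minimal sketch using sp_configure (assuming you have the required server-level permissions and substituting your own connection string) might look like this:
using System.Data.SqlClient;

// Sketch only: enables the "disallow results from triggers" server option.
// The connection string is a placeholder; 'show advanced options' is enabled
// first because this is an advanced configuration option.
using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
{
    conn.Open();
    using (var cmd = new SqlCommand("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;", conn))
    {
        cmd.ExecuteNonQuery();
    }
    using (var cmd = new SqlCommand("EXEC sp_configure 'disallow results from triggers', 1; RECONFIGURE;", conn))
    {
        cmd.ExecuteNonQuery();
    }
}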

Related

How do I write EF.Functions extension method?

I see that EF Core 2 has an EF.Functions property (see the EF Core 2.0 announcement) which can be used by EF Core or providers to define methods that map to database functions or operators so that they can be invoked in LINQ queries. It includes a LIKE method that gets sent to the database.
But I need a different method, SOUNDEX(), that is not included. How do I write such a method that passes the function to the database the way the DbFunction attribute did in EF6? Or do I need to wait for MS to implement it? Essentially, I need to generate something like
SELECT * FROM Customer WHERE SOUNDEX(lastname) = SOUNDEX(@param)
Adding a new scalar method to EF.Functions is easy - you simply define an extension method on the DbFunctions class. However, providing the SQL translation is hard and requires digging into EFC internals.
EFC 2.0 also introduces a much simpler approach, explained in the Database scalar function mapping section of the New features in EF Core 2.0 documentation topic.
According to that, the easiest way is to add a static method to your DbContext derived class and mark it with the DbFunction attribute. E.g.
public class MyDbContext : DbContext
{
    // ...

    [DbFunction("SOUNDEX")]
    public static string Soundex(string s) => throw new Exception();
}
and use something like this:
string param = ...;
MyDbContext db = ...;
var query = db.Customers
.Where(e => MyDbContext.Soundex(e.LastName) == MyDbContext.Soundex(param));
You can declare such static methods in a different class, but then you need to manually register them using HasDbFunction fluent API.
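For example, a minimal sketch of that manual registration (using a hypothetical static helper class called MyDbFunctions) might look like this:
using System;
using Microsoft.EntityFrameworkCore;

public static class MyDbFunctions
{
    // Maps to the SQL Server SOUNDEX function; the body is never executed client-side.
    [DbFunction("SOUNDEX")]
    public static string Soundex(string s) => throw new NotSupportedException();
}

public class MyDbContext : DbContext
{
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Required because the method does not live on the DbContext itself.
        modelBuilder.HasDbFunction(
            typeof(MyDbFunctions).GetMethod(nameof(MyDbFunctions.Soundex)));
    }
}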
EFC 3.0 has changed this process a little, as per https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-3.0/breaking-changes#udf-empty-string
Example of adding CHARINDEX in a partial context class:
public partial class MyDbContext
{
    [DbFunction("CHARINDEX")]
    public static int? CharIndex(string toSearch, string target) => throw new Exception();

    partial void OnModelCreatingPartial(ModelBuilder modelBuilder)
    {
        modelBuilder
            .HasDbFunction(typeof(MyDbContext).GetMethod(nameof(CharIndex)))
            .HasTranslation(
                args => SqlFunctionExpression.Create("CHARINDEX", args, typeof(int?), null));
    }
}

How do I map a column to uppercase in .NET 4.5 C# Entity Framework 6 using both Oracle and SQL Server?

I'm using C#, .NET 4.5 and Entity Framework 6 in my project. It uses both Oracle and SQL Server, depending on the installation at the client.
The approach is database-first, as this database existed already by the time we decided to change the ORM from NHibernate to Entity Framework 6.
The mapping looks like this:
ToTable(schema + ".Motorista");
Property(x => x.Criacao).HasColumnName("criacao").IsOptional();
The table and column names are all in PascalCase in the mapping, which works fine with SQL Server, but in Oracle all the names are uppercase, which causes an error:
ORA-00942: table or view does not exist
If I manually make the names uppercase, then it works fine on Oracle. But I can't do that because of compatibility with SQL Server.
How can I tell Entity Framework to uppercase all the names when using Oracle?
Can I use conventions in this scenario?
When the database names (tables and columns) are equal to the class and property names in the class model it's very easy to introduce custom code-first conventions:
In the context's OnModelCreating override you can add these lines to add conventions for how table and column names will be derived from the class and property names, respectively:
modelBuilder.Types().Configure(
    c => c.ToTable(c.ClrType.Name.ToUpper(), schema));
modelBuilder.Properties().Configure(
    c => c.HasColumnName(c.ClrPropertyInfo.Name.ToUpper()));
Of course you should do this conditionally, i.e. when connecting to Oracle. For instance, by checking a global flag like OnOracle that you could set by evaluating
ConfigurationManager.ConnectionStrings[0].ProviderName
== "System.Data.OracleClient"
on application start up.
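Putting the two together, a minimal sketch (assuming a named connection string called "MyDb" and that the Oracle provider name contains "Oracle") might look like this:
using System;
using System.Configuration;
using System.Data.Entity;

public class MyDbContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Hypothetical check: adjust the connection string name and the
        // provider name comparison to match your configuration.
        bool onOracle = ConfigurationManager.ConnectionStrings["MyDb"]
            .ProviderName.IndexOf("Oracle", StringComparison.OrdinalIgnoreCase) >= 0;

        if (onOracle)
        {
            // Only uppercase the names when running against Oracle.
            modelBuilder.Types().Configure(
                c => c.ToTable(c.ClrType.Name.ToUpper()));
            modelBuilder.Properties().Configure(
                c => c.HasColumnName(c.ClrPropertyInfo.Name.ToUpper()));
        }

        base.OnModelCreating(modelBuilder);
    }
}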
Check the providerName attribute in the named connection string to see if your connection is for SQL Server or Oracle (or add a redundant value in the appSettings section of the configuration). Then do what @AaronLS suggested and add a helper method to case your names correctly and apply any additional formatting. The helper method should check the database type as mentioned above and apply or skip the casing/formatting accordingly.
Here is an example.
public class MyDbContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new SomeMappedTypeMapper());
        base.OnModelCreating(modelBuilder);
    }
}

public class SomeMappedType
{
    public int SomeMappedColumnId { get; set; }
    public string SomeMappedColumn { get; set; }
}

public class SomeMappedTypeMapper : EntityTypeConfiguration<SomeMappedType>
{
    public SomeMappedTypeMapper()
    {
        this.HasKey(x => x.SomeMappedColumnId);
        this.ToTable("SomeMappedType"); // If needed, apply the same technique as used in the column name extension
        this.Property(x => x.SomeMappedColumnId).HasColumnNameV2("SomeMappedColumnId").HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        this.Property(x => x.SomeMappedColumn).HasColumnNameV2("SomeMappedColumn");
    }
}

public static class TypeHelper
{
    private static bool isOracle;

    static TypeHelper()
    {
        isOracle = System.Configuration.ConfigurationManager.ConnectionStrings["yourDbConnectionName"].ProviderName.IndexOf("oracle", StringComparison.OrdinalIgnoreCase) >= 0;
    }

    public static PrimitivePropertyConfiguration HasColumnNameV2(this PrimitivePropertyConfiguration property, string columnName)
    {
        if (isOracle)
            return property.HasColumnName(columnName.ToUpper());

        return property.HasColumnName(columnName);
    }
}
The linked approach is for EF Core, but it may help you: it converts names with ToUpper, though you can change it to ToLower, and you can also use the Humanizer NuGet package for other kinds of capitalization.
Import that file into your project and use it like this (a sketch of what such extension methods might look like follows the usage example):
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    modelBuilder.ToUpperCaseTables();
    modelBuilder.ToUpperCaseColumns();
    // ...
}
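The linked file isn't reproduced here, but a minimal sketch of what such extension methods could look like against the EF Core 3.x metadata APIs (method names chosen to match the usage above) is:
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class ModelBuilderCaseExtensions
{
    public static void ToUpperCaseTables(this ModelBuilder modelBuilder)
    {
        // Rename every mapped table to its uppercase equivalent.
        foreach (var entityType in modelBuilder.Model.GetEntityTypes())
        {
            entityType.SetTableName(entityType.GetTableName().ToUpper());
        }
    }

    public static void ToUpperCaseColumns(this ModelBuilder modelBuilder)
    {
        // Rename every mapped column to its uppercase equivalent.
        foreach (var property in modelBuilder.Model.GetEntityTypes()
                                             .SelectMany(t => t.GetProperties()))
        {
            property.SetColumnName(property.GetColumnName().ToUpper());
        }
    }
}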
Consider a table called "Person" with a column called "Name" in SQL Server, but in Oracle the table is called "PERSON" with a column called "NAME".
We were able to use our models generated against SQL Server on our Oracle database by adding the following code to the DbContext class's OnModelCreating:
modelBuilder.Entity<Person>()
    .HasEntitySetName("Person")
    .ToTable("PERSON");

modelBuilder.Entity<Person>()
    .Property(t => t.Name)
    .HasColumnName("NAME");

How to switch between DatabaseGeneratedOption.Identity, Computed and None at runtime without having to generate empty DbMigrations

I am migrating a legacy database to a new database which we need to access and "manage" (as oxymoronic as it might sound) primarily through Entity Framework Code-First.
We are using MS SQL Server 2014.
The legacy database contained some tables with computed columns. Typical GUID and DateTime stuff.
Technically speaking, these columns did not have a computed column specification, but rather were given default values of NEWID() and GETDATE().
We all know that it is very easy to configure the DbContext to deal with those properties as follows:
modelBuilder.Entity<Foo>()
.Property(t => t.Guid)
.HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
modelBuilder.Entity<Bar>()
.Property(t => t.DTS)
.HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
The above would instruct the Entity Framework to ignore submitting any supplied values for such properties during INSERTs and UPDATEs.
But now we need to allow for the import of legacy records and maintain the OLD values, including the PRIMARY KEY, which is marked as IDENTITY.
This means we would have to set the Id, Guid and DTS properties to DatabaseGeneratedOption.None while inserting those records.
For the case of Id, we would have to somehow execute SET IDENTITY_INSERT ... ON/OFF within the connection session.
And we want to do this importing process via Code-First as well.
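For what it's worth, the IDENTITY_INSERT part itself can be driven from Code-First by running raw SQL on the context's connection inside a single transaction. A minimal sketch (assuming the Id property is mapped with DatabaseGeneratedOption.None for the import, and using the Foo/dbo.Foos names from the migration below) would be:
// Sketch only: IDENTITY_INSERT is scoped to the session, so the raw SQL and
// the insert must run on the same connection/transaction.
using (var tx = context.Database.BeginTransaction())
{
    context.Database.ExecuteSqlCommand("SET IDENTITY_INSERT dbo.Foos ON");

    context.Foos.Add(new Foo { Id = 42 /* legacy key */ });
    context.SaveChanges();

    context.Database.ExecuteSqlCommand("SET IDENTITY_INSERT dbo.Foos OFF");
    tx.Commit();
}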
If I modify the model and "temporarily" set those properties to DatabaseGeneratedOption.None after the database has been created, we get the typical:
The model backing the context has changed since the database was created. Consider using Code First Migrations to update the database.
I understand that we could generate an empty coded migration with -IgnoreChanges so as to "establish" this latest version of the context, but this wouldn't be an acceptable strategy, as we would have to run empty migrations back and forth solely for this purpose.
Half an answer:
We have considered giving these properties nullable types, i.e.
public class Foo
{
...
public Guid? Guid { get; set; }
}
public class Bar
{
...
public DateTime? DTS { get; set; }
}
While taking care of the default values in an initial DbMigration:
CreateTable(
    "dbo.Foos",
    c => new
    {
        Id = c.Int(nullable: false, identity: true),
        Guid = c.Guid(nullable: false, defaultValueSql: "NEWID()"),
    })
    .PrimaryKey(t => t.Id);

CreateTable(
    "dbo.Bars",
    c => new
    {
        Id = c.Int(nullable: false, identity: true),
        DTS = c.DateTime(nullable: false, defaultValueSql: "GETDATE()"),
    })
    .PrimaryKey(t => t.Id);
The Question:
But the question remains: Is there a way to switch between DatabaseGeneratedOption.Identity, DatabaseGeneratedOption.Computed and DatabaseGeneratedOption.None at runtime?
At the very least, how could we turn DatabaseGeneratedOption.Identity on/off at runtime?
A certain amount of the configuration of the context is always going to be dependent on the runtime environment - for example, proxy generation and validation. As such, runtime configuration of the Entity Framework DbContext is something I leverage quite heavily.
Although I've never used this approach to switch the configuration of the context on a per use-case basis, I see no reason why this would not work.
In its simplest form, this can be achieved by having a set of EntityTypeConfiguration classes for each environment. Each configuration set is then wired to the DbContext on a per-environment basis. Again, in its simplest form this could be achieved by having a DbContext type per environment. In your case, this would be per use-case.
Less naively, I usually encapsulate the configuration of the context in an environment-specific unit of work. For example, the unit of work for an Asp.Net environment has an underlying DbContext configured to delegate validation to the web framework, as well as to turn off proxy generation to prevent serialisation issues. I imagine this approach would have similar usefulness to your problem.
For example (using brute force code):
// Foo Configuration which enforces computed columns
public class FooConfiguration : EntityTypeConfiguration<Foo>
{
    public FooConfiguration()
    {
        Property(p => p.DateTime).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
        Property(p => p.Guid).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
    }
}

// Foo configuration that allows computed columns to be overridden
public class FooConfiguration2 : EntityTypeConfiguration<Foo>
{
    public FooConfiguration2()
    {
        Property(p => p.DateTime).HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
        Property(p => p.Guid).HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
    }
}

// DbContext that enforces computed columns
public class MyContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new FooConfiguration());
    }
}

// DbContext that allows computed columns to be overridden
public class MyContext2 : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new FooConfiguration2());
    }
}
This can obviously be tidied up - we usually use a combination of factory and strategy patterns to encapsulate the creation of a runtime-specific context. In combination with a DI container this allows the correct set of configuration classes to be injected on a per-environment basis (a minimal factory sketch is shown after the usage example below).
Example usage:
[Fact]
public void CanConfigureContextAtRuntime()
{
    // Enforce computed columns
    using (var context = new MyContext())
    {
        var foo1 = new Foo();
        context.Foos.Add(foo1);
        context.SaveChanges();
    }

    // Allow overridden computed columns
    using (var context = new MyContext2())
    {
        var foo2 = new Foo { DateTime = DateTime.Now.AddYears(-3) };
        context.Foos.Add(foo2);
        context.SaveChanges();
    }

    // etc
}
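As a follow-up to the factory remark above, a minimal sketch (class names follow the brute-force example; the enum and factory are hypothetical) might look like this:
using System.Data.Entity;

public enum ColumnGenerationMode
{
    EnforceComputedColumns,
    AllowOverriddenColumns
}

// Sketch only: a trivial factory that picks the context variant per use case.
// A DI container could resolve this instead of calling it directly.
public static class ContextFactory
{
    public static DbContext Create(ColumnGenerationMode mode)
    {
        return mode == ColumnGenerationMode.AllowOverriddenColumns
            ? (DbContext)new MyContext2()
            : new MyContext();
    }
}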

SQL Server Session Serialization in ASP.Net MVC

I am new to ASP.NET MVC. Any help in resolving my problem is greatly appreciated.
I am using a LINQ to SQL DB in my MVC application. For one of the auto-generated partial classes (for example, MyClass for table MyClass), I created another partial class MyClass and added DataAnnotations like the following...
namespace NP
{
    [MetadataType(typeof(myData))]
    [Serializable()]
    public partial class MyClass
    {
    }

    public class myData
    {
        [Required]
        public string ID { get; set; }

        // Other properties are listed here
    }
}
In my controller class, for example MyHomeController, I have code as follows:
List<MyClass> list = new List<MyClass>();
list = dbContext.StoredProcedure(null).ToList<MyClass>();
Session["data"] = list;
The above code works fine if I use InProc session state. But if I use SQLServer mode then I get an error:
"Unable to serialize the session state. In 'StateServer' and
'SQLServer' mode, ASP.NET will serialize the session state objects,
and as a result non-serializable objects or MarshalByRef objects are
not permitted. The same restriction applies if similar serialization
is done by the custom session state store in 'Custom' mode. "
Can anyone tell me what I am doing wrong here? I can see the data is getting populated in the ASPState database tables, but the application throws the error above.
Just mark all classes whose instances you want to store in Session as Serializable.
Finally I was able to resolve the issue.
Solution:
Add the statement below before querying the database. In my case I was calling the LINQ to SQL context (dbContext).
dbContext.ObjectTrackingEnabled = false;
Sample Code:
dbContext.ObjectTrackingEnabled = false;
var empList = dbContext.SomeStoredProcedure().ToList();
Session["employee"] = empList;

Using stored procedures (Linq-to-SQL, not EF) in WCF RIA - Silverlight 4

For the love of heaven and earth I really wish someone could help me out with this issue. It seems everyone has something to say about EF but nothing about Linq-to-SQL.
I am trying to grab some data from my table via a stored procedure, believe me, that's all.
I added the Linq-to-SQL model (LAMP.dbml)
added the stored procedure (getAffectedParcel) from the server explorer. getAffectedParcel takes 2 strings as parameters
Build the application.
Added a domain service class (LAMPService)
Selected the (LAMPDataContext) as the data context class (normally I would tick generate metadata, but since I am not working with tables it's not enabled for ticking)
Added the following function to the LAMPService.cs:
public IEnumerable<getAffectedParcelResult> GetTheAffectedParcels(String v, String vf)
{
    return this.DataContext.getAffectedParcel(v, vf).AsEnumerable();
}
Added the following code to a Silverlight page in an attempt to consume the stored procedure:
LAMPContext db = new LAMPContext();
try
{
    var q = db.GetTheAffectedParcels("18606004005", "").Value;
    foreach (getAffectedParcelResult GAP in q)
    {
        MessageBox.Show(GAP.Owner);
    }
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message.ToString());
}
Build and run the application. An error occurs stating:
Object reference not set to an instance of an object.
I have tried ~1000,000 ways to see if this thing would work, but to no avail. Please don't tell me to use Entity Framework, I want to use Linq-to-SQL. Can someone (anyone) help me out here.
//houdini
Calling a stored procedure from the Silverlight client happens in the Async world. Let's consider an example from the AdventureWorks database...
Here's what the Domain Service method looks like. It is calling the EF on a stored procedure in the database called 'BillOfMaterials'.
public IQueryable<BillOfMaterial> GetBillOfMaterials()
{
return this.ObjectContext.BillOfMaterials;
}
Back on the client side, here is the code for setting up the call...
public GetSp()
{
    InitializeComponent();

    DomainService1 ds1 = new DomainService1();
    var lo = ds1.Load(ds1.GetBillOfMaterialsQuery());
    lo.Completed += LoCompleted;
}
First, the Domain Service is created, and then it is used to load the results of the stored procedure. In this particular case, the result of this is an instance of 'LoadOperation'. These things are async, so the LoadOperation needs to have a callback for when it is finished. The callback code looks like this...
public ObservableCollection<BillOfMaterial> MyList { get; set; }

void LoCompleted(object sender, EventArgs e)
{
    LoadOperation lo = sender as LoadOperation;
    if (lo != null)
    {
        MyList = new ObservableCollection<BillOfMaterial>();
        foreach (BillOfMaterial bi in lo.AllEntities)
        {
            MyList.Add(bi);
        }
        dataGrid1.ItemsSource = MyList;
    }
}
In this method, the 'sender' is dereferenced into the LoadOperation instance, and then all the goodies from the database can be accessed. In this trivial example, a list is built and passed to DataGrid as the ItemsSource. It's good for understanding, but you would probably do something else in practice.
That should solve your problem. :)
The best advice I can give on Silverlight and RIA is never do ANYTHING on your own until you have tried it in AdventureWorks. You will just waste your time and beat your head against the wall.
Firstly, it seems like your DomainService code is written for Invoke() rather than Query(). You should use Query as it enables you to update data back to the server.
Solution: you should add a [Query] attribute to GetTheAffectedParcels on the domain service.
[Query]
public IQueryable<Parcel> GetTheAffectedParcels(string ParcelNumber, string LotNumber)
{
    // etc.
}
Secondly, RIA Services needs to know which is the primary key on the Parcel class.
Solution: Apply a MetadataType attribute to the Parcel class, which allows you to add metadata to the Parcel class indirectly, since it is generated by Linq2Sql and you couldn't add annotations directly to the ParcelId - it'd get wiped away.
[MetadataType(typeof(ParcelMetadata))]
public partial class Parcel
{
}

public class ParcelMetadata
{
    [System.ComponentModel.DataAnnotations.Key]
    public int ParcelId { get; set; }
}
Thirdly, modify your client. Try this on the Silverlight client instead:
LAMPContext db = new LAMPContext();
try
{
    var q = db.GetTheAffectedParcelsQuery("18606004005", "");
    db.Load(q, (op) =>
    {
        if (op.HasError)
        {
            label1.Text = op.Error.Message;
            op.MarkErrorAsHandled();
        }
        else
        {
            foreach (var parcel in op.Entities)
            {
                // your code here
            }
        }
    });
}
catch (Exception ex)
{
    label1.Text = ex.Message;
}
Much thanks to Chui and Garry who practically kicked me in the right direction :) [thanks guys...ouch]
This is the procedure I finally undertook:
After adding the data model (LINQ2SQL) and the domain service, I created a partial class [as suggested by Chui] and included the following metadata info therein:
[MetadataTypeAttribute(typeof(getAffectedParcelResult.getAffectedParcelResultMetadata))]
public partial class getAffectedParcelResult
{
    internal sealed class getAffectedParcelResultMetadata
    {
        [Key]
        public string PENumber { get; set; }
    }
}
Then I adjusted the Domain Service to include the following:
[Query]
public IQueryable<getAffectedParcelResult> GetTheAffectedParcels(string v, string vf)
{
    // IEnumerable<getAffectedParcelResult> ap = this.DataContext.getAffectedParcel(v, vf);
    return this.DataContext.getAffectedParcel(v, vf).AsQueryable();
}
Then I built the app, after which the getAffectedParcelResult stored procedure appeared in the Data Sources panel. I wanted to access it via code, however, so I accessed it in Silverlight [.xaml page] via the following:
LAMPContext db = new LAMPContext();
var q = db.GetTheAffectedParcelsQuery("18606004005", "");
db.Load(q, (op) =>
{
    if (op.HasError)
    {
        MessageBox.Show(op.Error.Message);
        op.MarkErrorAsHandled();
    }
    else
    {
        foreach (getAffectedParcelResult gap in op.Entities)
        {
            ownerTextBlock.Text = gap.Owner.ToString();
        }
    }
}, false);
This worked nicely. The thing is, my stored procedure returns a complex type, so to speak. As such, it was not possible to map it to any particular entity.
Oh and by the way this article helped out as well:
http://onmick.com/Home/tabid/154/articleType/ArticleView/articleId/2/Pulling-Data-from-Stored-Procedures-in-WCF-RIA-Services-for-Silverlight.aspx
