Updating/inserting into a table with Always Encrypted columns using EF Core 5

I'm having trouble using Entity Framework Core 5 with the "Always Encrypted" feature in an ASP.NET Core 5 API. I've configured an Azure Key Vault and updated the connection string as necessary. I can read encrypted column data successfully with code like this:
await using var context = new RcContext();
Company c = await context.Companies.FindAsync(id);
where the Companies table has an encrypted column. The encrypted column is defined in the database as datatype varchar(16) and is returned as plain text in a string member of the entity.
However, trying to update a company or insert new companies using context.SaveChanges() is failing. I get the error
SqlException: Operand type clash: nvarchar(4000) encrypted with ... is incompatible with varchar(16) encrypted with ...
Some suggestions for solving this point to using SqlCommand from SqlClient or stored procedures, or increasing the column's size in the database to nvarchar(max).
Is EF Core not capable of using the normal SaveChanges() pattern to update data in a SQL Server database with Always Encrypted columns? How do I make this work with EF Core?

With Always Encrypted, the SQL client needs to know the exact size of each encrypted column so it can perform the encryption on the client side. So the columns must be attributed like:
[Column(TypeName = "varchar(16)")]
public string PaymentCreditCard { get; set; }
I only had to attribute the encrypted columns, not every column. Our code base had not used data annotations prior to this effort, and it wasn't clear to me that they are required for Always Encrypted to work.
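If you prefer to keep entity classes free of data annotations, the same mapping can be expressed with EF Core's fluent API instead. Here is a minimal sketch, assuming the RcContext and Company types from the question:

using Microsoft.EntityFrameworkCore;

public class RcContext : DbContext
{
    public DbSet<Company> Companies { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // The declared type must match the encrypted column exactly;
        // otherwise SqlClient sends nvarchar(4000) and the operand
        // type clash above occurs.
        modelBuilder.Entity<Company>()
            .Property(c => c.PaymentCreditCard)
            .HasColumnType("varchar(16)");
    }
}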

Related

RepoDb type which works with SQL Server as well as SQLite

I have an application that supports two databases, MSSQL and SQLite. I am revamping the underlying data access and models using RepoDb, and I want to use the same model for both SQLite and MSSQL. Depending on the connection string I create the appropriate connection object (i.e. SQLiteConnection or SqlConnection). I am facing a problem with one of my entities; the problem is with a column type.
public class PLANT
{
    public string OP_ID { get; set; }
}
The OP_ID in SQL Server maps to a uniqueidentifier, and in SQLite to nvarchar. Querying works fine with SQLiteConnection; the problem appears when I use SqlConnection:
var plant = connection.Query<PLANT>(e => e.OP_ID == "3FFA25B5-4DF5-4216-846C-2C9F58B7DD90").FirstOrDefault();
I get the error:
"No coercion operator is defined between types 'System.Guid' and 'System.String'."
I have tried using the IPropertyHandler<Guid, string> on the OP_ID; it works for SqlConnection but fails for SQLiteConnection.
Is there a way that I can use the same model for both connections?
I strongly recommend sharing a model between multiple databases only if the PK has the same type in both; otherwise you will end up with coercion problems like this, because one database does not support the target type (i.e. UNIQUEIDENTIFIER).
In any case, a PropertyHandler is not the way to go here, as the input types differ between the two databases. You can use separate models for SQLite and SQL Server; otherwise, you can explicitly set RepoDb.Converter.ConversionType = ConversionType.Automatic so the coercion is handled automatically.
I do not recommend setting ConversionType to Automatic, as it adds conversion logic on top of data extraction, but it would fix the problem.
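For reference, a minimal sketch of that global setting. Where the setting lives (RepoDb.Converter here) has moved between RepoDb versions, so treat the exact API surface as an assumption to verify against your installed version:

using RepoDb;
using RepoDb.Enumerations;

public static class DataAccessBootstrap
{
    public static void Initialize()
    {
        // Ask RepoDb to coerce automatically between the database type
        // (uniqueidentifier -> Guid) and the model type (string).
        // This adds conversion work on top of data extraction.
        Converter.ConversionType = ConversionType.Automatic;
    }
}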

Transfer the same _id to the SQL Server database as MongoDB generates when exporting an object

I work with the MEAN stack: Node, Express, Angular, MongoDB.
The project has an orders module.
The customer selects a product with its specifications, adds it to the cart, and places the order: chooses a payment method and finalizes.
Inside exports.createOrder = function (req, res), a record is created in the Mongo database with its unique _id.
I need to save this data to a SQL Server database as well.
So at this point I additionally open a connection to SQL Server using the mssql library and bind parameters such as new sql.Request().input('Order_ID', sql.NVarChar(255), order._id), among others.
I then call .execute('AddOrder'), a stored procedure on the SQL Server that adds the data to the orders table.
The problem is that the saved object's _id differs between the two databases. The other fields are the same.
When the order is updated, a new row is created in the SQL Server database with an _id that now matches the one in Mongo, but my old row is not overwritten - which is logical, because the update matches on orderID.
All fields of the record are saved correctly except _id, which ends up different.
For example, in the Mongo database I have 586a8871a14d27e81a55533d, but in MSSQL it is 586a8871a14d27e81a55533f.
Mongo creates a new _id; specifically, the 3-byte counter portion changes.
I want accurate data in both databases; some of the data needs to be saved to SQL in addition to Mongo.
A secondary identifier in MSSQL is not needed and not used.
Is there a way to pass the same object _id to the SQL Server database when the order is placed, so that Mongo does not generate a new _id at that moment?
And why, if I set an auto-increment _id, does the _id column in the SQL Server database end up NULL?
How do I pass an additional new_id, set to auto-increment, to the SQL Server database? It is written to Mongo, but throughout the rest of the project it is undefined.
Mongo ids and SQL ids are different by default; in SQL the default id is usually an integer type. You need to create an id column in SQL of a string type (a primary key called "id", type string) to be able to copy the id from Mongo to SQL.
It would be great if you could provide some code for a more detailed answer.
I think the issue is that the Mongo ObjectID is binary, so converting it to a string just shows the textual representation of the binary. Instead of sql.NVarChar(255), can you use sql.Binary(12)? See this post from much smarter people: Has anyone found an efficient way to store BSON ObjectId values in an SQL database?

VB.NET SqlBulkCopy - DataTable DateTime type conversion to SQL

I'm trying to use SqlBulkCopy.WriteToServer to insert data into a certain table of a SQL Server database. To do so, I have a DataTable that is populated with the records I need to save to the database. Basically my code is based on Microsoft's example provided for the SqlBulkCopy.WriteToServer method.
The problem is that I have two DateTime fields in the SQL table, and I don't know how to represent them when defining the columns of the DataTable. I tried both System.String and System.DateTime, but when the code executes, it says it cannot convert a String type to DateTime. The DataTable columns are defined in the following way (code taken from Microsoft's example):
Dim productID As DataColumn = New DataColumn()
productID.DataType = System.Type.GetType("System.Int32")
How can I do that? What is the correct type to use for a DataTable column corresponding to a SQL DateTime field?
Previously, I used an SQL command to map every field, for example:
' Fields initialization
SqlCmd.Parameters.Add("@Field1", SqlDbType.DateTime)
[...]
SqlCmd.Parameters.Add("@FieldN", SqlDbType.NChar, 255)
' After opening the transaction
SqlCmd.Parameters("@Field1").Value = MyDateTimeSavedInAString
[...]
SqlCmd.Parameters("@FieldN").Value = "NTHVALUE"
Thanks in advance.
[UPDATE1] The DateTime column now works, but the same kind of error is thrown by another column that is saved to a time field in the SQL Server table. What VB.NET type should I use for a DataTable column that maps to a SQL Server column of type time?
[UPDATE2] I tried a SQL table with every field set to the nvarchar data type, but it still gives the same error; it says it is impossible to convert the String type of the source column to the nvarchar type of the destination column.
Use DateTime - DateTime-to-datetime conversion works like a charm. SqlBulkCopy does not tolerate many modifications to the data - it bypasses most of SQL Server's processing for raw performance.
And you can avoid using a DataTable - it takes about an hour or two to write your own object wrapper ;) DataTables are not exactly efficient.
And try to wrap it up more - SqlBulkCopy is terrible in that it takes an exclusive lock on the target table. I have my own wrapper that creates a temporary table, bulk copies into that, and then uses a simple INSERT INTO ... SELECT to move the data to the final table in one short, atomic operation.
And be aware: below around 1,000 rows it makes no sense to use SqlBulkCopy - the overhead is too high. Instead, build a long multi-row INSERT statement.
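A minimal sketch of the fix (in C# for brevity): declare the DataTable columns with the CLR types that actually map to the SQL column types - DateTime for datetime, TimeSpan for time - and parse any strings up front, since SqlBulkCopy will not convert them for you. The table name and connection string are placeholders:

using System;
using System.Data;
using System.Data.SqlClient;

class BulkCopySketch
{
    static void Main()
    {
        var table = new DataTable("MyTable");
        table.Columns.Add("CreatedAt", typeof(DateTime)); // SQL datetime / datetime2
        table.Columns.Add("StartTime", typeof(TimeSpan)); // SQL time
        table.Columns.Add("Name", typeof(string));        // SQL nchar / nvarchar

        // Convert strings before adding rows; SqlBulkCopy bypasses
        // most server-side processing and will not coerce types.
        table.Rows.Add(DateTime.Parse("2021-03-01 10:30:00"),
                       TimeSpan.Parse("10:30:00"),
                       "example");

        using (var connection = new SqlConnection("<connection string>"))
        {
            connection.Open();
            using (var bulk = new SqlBulkCopy(connection) { DestinationTableName = "dbo.MyTable" })
            {
                bulk.WriteToServer(table);
            }
        }
    }
}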

Convert varchar to a double in DataSet TableAdapter Fill

In SQL Server 2008, I have a strongly typed data set with a table:
TABLE
ID (Guid)
Value (varchar(50))
In this table, Value actually represents an encrypted value in the database, which is decrypted after being read from this table on my server.
In Visual Studio, I have a DataSet with my table, which looks like:
TABLE
ID (Guid)
Value (float)
I want to know if there is a way, in a DataSet, to call my decryption methods on Value when calling the Fill query on the TableAdapter for this table.
Is there any way to extend the DataSet XSD to support this sort of data massaging when reading data?
In addition to this, is there a way when inserting/updating records in this table to write strings to encrypted values?
NOTE:
All Encrypt/Decryption code is being performed on the client to the database, not on the database itself.
The Fill() method is going to execute whatever SQL is in the SelectCommand property of the DataAdapter. It's certainly possible to customize the SQL to "massage" data as it comes in.
Your issue is made more complex by the need to execute some .NET decryption. If you really want to do this and it is of high value to you, you could install a .NET assembly in the SQL Server database. Once this was done, you should be able to specify a custom SelectCommand that calls the code in your .NET assembly to decrypt the data at select time.
But that seems like an awful lot of work for very little reward. It's probably easier and more efficient to simply post-process the dataset and decrypt there. :)
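A minimal sketch of that post-processing approach: fill an untyped staging table whose Value column is still a string, then decrypt and parse into the typed table's float column. Decrypt() below is a hypothetical stand-in for whatever client-side decryption routine the application already uses:

using System;
using System.Data;

static class DataSetDecryption
{
    public static void CopyDecrypted(DataTable staging, DataTable typed)
    {
        foreach (DataRow src in staging.Rows)
        {
            DataRow dest = typed.NewRow();
            dest["ID"] = src["ID"];
            // Decrypt the varchar(50) payload, then parse it into the
            // double that the typed DataSet's float column expects.
            dest["Value"] = double.Parse(Decrypt((string)src["Value"]));
            typed.Rows.Add(dest);
        }
    }

    // Hypothetical placeholder for the application's real decryption code.
    static string Decrypt(string cipherText) =>
        throw new NotImplementedException("application-specific decryption");
}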

SQL Server Conversion Exception: Error converting data type varchar to numeric

I am working on a project where the goal is to convert an existing application to SQL Server. However, I am running into issues with ID generation - specifically, the conversion of data types.
The Hibernate annotations for my ID column are as follows:
@Id
@GeneratedValue(generator="ID_GEN", strategy=GenerationType.TABLE)
@TableGenerator(name="ID_GEN", table="[$SSMA_seq_SEQ_EXAMPLE_ID]")
@Column(name="ID", unique=true, nullable=false)
public String getId() {
    return this.id;
}
The ID column of this table maps to a varchar(50), while the $SSMA_seq_SEQ_EXAMPLE_ID maps to a table with a single id column of data type numeric(38,0).
When I attempt an insert (by creating a Java object and persisting it), I get the following exception: com.microsoft.sqlserver.jdbc.SQLServerException: Error converting data type varchar to numeric.
Conceptually, it makes sense that a numeric(38,0) would fit into a varchar(50), but it seems that the SQL Server implicit conversion does not work in this case. Unfortunately, changing the database definition is not an option at this point in time.
Are there any global settings that will force this conversion in either Hibernate or SQL Server? Because I am using Hibernate to generate the IDs, I do not have much control over the SQL that is generated to grab an ID before committing this object to the database.
I ended up writing a custom Generator that assumed the known data types and did the conversions. See this for full details.
