SqlBulkCopy with Always Encrypted column

I am importing a CSV through .NET code, loading the data into a DataTable, and trying to bulk insert it with SqlBulkCopy, but I am getting a certificate error. Other operations work without a certificate error; it only happens when I perform the SqlBulkCopy. Here is the error description:
Failed to decrypt a column encryption key using key store provider:
'MSSQL_CERTIFICATE_STORE'. The last 10 bytes of the encrypted column
encryption key are: '7F-1D-20-E1-43-0B-B5-92-66-78'. Certificate with
thumbprint 'XXXXXXXXXXXXXXXXXXXXXXXXXX' not found in certificate store
'My' in certificate location 'CurrentUser'. Verify the certificate
path in the column master key definition in the database is correct,
and the certificate has been imported correctly into the certificate
location/store. Parameter name: masterKeyPath
I imported the certificate on the local machine. Below is the code.
I have the data in the dt DataTable object, which is returned from worksheet.Cells.ExportDataTableAsString.
Dim copy As New SqlBulkCopy(ConnString, SqlBulkCopyOptions.KeepIdentity Or SqlBulkCopyOptions.AllowEncryptedValueModifications)
copy.DestinationTableName = "Customer"
copy.ColumnMappings.Add("CustID", "CustID")
copy.ColumnMappings.Add("SSN", "SSN")
copy.WriteToServer(dt)
I am taking reference from https://dba.stackexchange.com/questions/160577/is-it-possible-to-bulk-insert-data-into-a-table-that-has-columns-encrypted-with
If we go as per the above, we need two rounds to insert the data. I have the DataTable object returned by Worksheet ExportDataTableAsString, and when I use that DataTable directly with SqlBulkCopy I get the certificate-missing error.
Can someone help me with this and suggest a better way to do it?

Right-click the certificate, then All Tasks > Manage Private Keys > Add user, and add the user
IIS AppPool\DefaultAppPool. That worked for me.
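For what it's worth, SqlBulkCopyOptions.AllowEncryptedValueModifications is meant for copying values that are already encrypted (for example between two tables protected by the same column encryption key); it does not make the driver encrypt plaintext CSV data. Once the process can reach the certificate's private key, one alternative to the two-round staging approach is a plain parameterized insert, which lets the client driver encrypt each value on the way in. A minimal C# sketch, assuming the Customer/CustID/SSN names from the question, an SSN column of char(11) (an assumption), and a connection string that includes Column Encryption Setting=Enabled:

using System.Data;
using Microsoft.Data.SqlClient;

static void InsertPlaintextRows(string connString, DataTable dt)
{
    // connString is assumed to contain "Column Encryption Setting=Enabled"
    using var conn = new SqlConnection(connString);
    conn.Open();

    foreach (DataRow row in dt.Rows)
    {
        using var cmd = new SqlCommand(
            "INSERT INTO dbo.Customer (CustID, SSN) VALUES (@CustID, @SSN)", conn);

        // Parameter types and sizes must match the target columns exactly,
        // otherwise the driver cannot encrypt them for the encrypted column.
        cmd.Parameters.Add("@CustID", SqlDbType.Int).Value = row["CustID"];
        cmd.Parameters.Add("@SSN", SqlDbType.Char, 11).Value = row["SSN"]; // char(11) is assumed

        cmd.ExecuteNonQuery();
    }
}

This is slower than a bulk copy, but it avoids having to pre-encrypt the data; if CustID is an identity column (the question uses KeepIdentity), the session also needs SET IDENTITY_INSERT dbo.Customer ON. For large volumes, the staging-table route with AllowEncryptedValueModifications from the linked answer remains the faster option.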

Related

Updating/inserting into a table with Always Encrypted columns using EF Core 5

I'm having trouble using Entity Framework Core 5 with the "Always Encrypted" feature in an ASP.NET Core 5 API. I've configured an Azure Key Vault and updated the connection string as necessary. I can read encrypted column data successfully, with code like this:
await using var context = new RcContext();
Company c = await context.Companies.FindAsync(id);
where the Companies table has an encrypted column. The encrypted column is defined in the database as datatype varchar(16) and is returned as plain text in a string member of the entity.
However, trying to update a company or insert new companies using context.SaveChanges() is failing. I get the error
SqlException: Operand type clash: nvarchar(4000) encrypted with ... is incompatible with varchar(16) encrypted with ...
Some suggestions for solving this point to using SqlCommand from SqlClient or stored procedures, or increasing the column's size in the database to nvarchar(max).
Is EF Core not capable of using the normal SaveChanges() pattern to make updates to data in a SQL Server with Always Encrypted columns? How do I make this work with EF Core?
With Always Encrypted, the SQL Client needs to know the size of the columns so it can do the encryption on the client. So columns must be attributed like:
[Column(TypeName = "varchar(16)")] public string PaymentCreditCard { get; set; }
I only had to attribute the encrypted columns, not every column. Our code base had not used data annotations prior to this effort, and it wasn't clear to me that they are required for Always Encrypted to work.
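To make that concrete, here is a minimal sketch of how the entity and model might be configured, reusing the Company, RcContext, and PaymentCreditCard names that appear in the question and answer; the fluent HasColumnType call is an equivalent alternative to the attribute rather than something the original answer used:

using Microsoft.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations.Schema;

public class Company
{
    public int Id { get; set; }

    // The declared size must match the encrypted column exactly, so the
    // driver parameterizes it as varchar(16) instead of nvarchar(4000).
    [Column(TypeName = "varchar(16)")]
    public string PaymentCreditCard { get; set; }
}

public class RcContext : DbContext
{
    public DbSet<Company> Companies { get; set; }

    // Fluent alternative to the [Column] attribute, if the project avoids annotations.
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Company>()
            .Property(c => c.PaymentCreditCard)
            .HasColumnType("varchar(16)");
    }
}

Either form tells the SqlClient provider the real column type, so SaveChanges() sends a varchar(16) parameter that can be encrypted with the column's key instead of the default nvarchar(4000).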

Error deploying SSAS cube to SQL Server Analysis Services

I am having an issue deploying an SSAS package to SQL Server Analysis Services. It complains of duplicate keys, but the column it references is not a primary key column. I queried the dimension table and confirmed that rows with different primary keys share the same values in the affected column, which is normal and expected. The attribute's Usage and Type properties are already set to Regular in SSDT. Please find the error I am receiving below. I would appreciate any ideas on how to fix this issue. Thank you.
Errors and Warnings from Response
Server: The current operation was cancelled because another operation
in the transaction failed. Errors in the OLAP storage engine: A
duplicate attribute key has been found when processing: Table:
'dwh_Dim_By_Answers', Column: 'QB_AnswerText', Value: 'hazard'. The
attribute is 'QB Answer Text'.
There are two solutions for this issue:
1. To avoid the duplicate key error when processing a dimension, set the dimension property ProcessingGroup to ByAttribute instead of ByTable.
2. Force SSAS to ignore the duplicate key error by setting KeyDuplicate to IgnoreError in the dimension key errors tab. To do this, go to SSMS or SSDT, choose Process, click Change Settings in the Process tab, open the Dimension key errors tab, select Use custom error configuration, and set KeyDuplicate to IgnoreError.
Visit: https://www.mssqltips.com/sqlservertip/3476/sql-server-analysis-services-ssas-processing-error-configurations/

"Value cannot be null. Parameter name: reportedElement" when adding a new Always Encrypted column to an existing table

Using Visual Studio database projects (SSDT) I added a new column to an existing table. I am using Always Encrypted to encrypt individual columns. When I add the column and try to publish, I get a popup in Visual Studio that says "Value cannot be null. Parameter name: reportedElement".
If I don't encrypt the column, it works.
If I clear the existing data out of the table, it works.
But just trying to add a new nullable encrypted column does not publish. It will not even generate the script that would be applied.
I ran the DacFx and SSDT logging and viewed the logs with Windows Event Viewer, but I just see the same error "Value cannot be null. Parameter name: reportedElement".
This is what the added column definition looks like.
[MyNewColumn] INT ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [DefaultColumnEncryptionKey], ENCRYPTION_TYPE = DETERMINISTIC, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NULL
I expect Visual Studio to publish successfully, adding my new nullable encrypted column, but the actual behavior is a pop-up that states "Value cannot be null. Parameter name: reportedElement".
I had the exact same issue, except I had decrypted the column to perform a lookup based on it that I couldn't do while it was encrypted (this is a local development database).
The solution was to perform the encryption manually via SSMS and then run the publish. I'm not sure why VS can't publish the change; the encryption keys are stored in the local certificate store and VS is running as admin, but it may not be able to access the keys to encrypt the data, whereas SSMS can.

Microsoft tech references for BULK INSERT may have some gaps ... can't get them to work

Huge edit -- I removed the ';' characters and replaced them with 'GO' and ... the secondary key and URL worked, except I got this:
Cannot bulk load. The file "06May2013_usr_tmp_cinmachI.csv" does not exist or you don't have file access rights.
BTW, this can't be true :) -- I'm able to use PowerShell to upload the file, so I'm sure it's not my account credentials. Here is the code I'm using now (again, apologies that I couldn't get it to format as a code block in this editor).
The docs say CREATE MASTER KEY is used to encrypt the SECRET later on, but there's no obvious link; I assumed this is all handled under the hood -- is that right? If not, maybe that's what's causing my access error.
So, the issue with the data source not existing was errant syntax -- evidently one can't use ';' to terminate these blocks of SQL, but 'GO' works.
The CSV file does exist:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0me!nfo'
GO
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'removed'
GO
CREATE EXTERNAL DATA SOURCE myDataSource
WITH (TYPE = BLOB_STORAGE, LOCATION = 'https://dtstestcsv.blob.core.windows.net/sunsource', CREDENTIAL = AzureStorageCredential)
GO
BULK
INSERT dbo.ISSIVISFlatFile
FROM '06May2013_usr_tmp_cinmachI.csv'
WITH
(DATA_SOURCE = 'myDataSource', FORMAT = 'CSV')
I feel obliged to post at least some info even if it's not a full answer.
I was getting this error:
Msg 4860, Level 16, State 1, Line 58
Cannot bulk load. The file "container/folder/file.txt" does not exist or you don't have file access rights.
I believe the problem might have been that I generated my SAS key with a start time of "right now", but that is UTC time, meaning that here in Australia the key would only become valid in ten hours. So I generated a new key with a start date a month earlier and it worked.
The SAS (Shared Access Signature) is a big string that is created as follows:
1. In the Azure portal, go to your storage account.
2. Press Shared Access Signature.
3. Fill in the fields (make sure your start date is a few days prior, and you can leave Allowed IP addresses blank).
4. Press Generate SAS.
5. Copy the string in the SAS Token field.
6. Remove the leading ? before pasting it into your SQL script.
Below is my full script with comments.
-- Target staging table
IF object_id('recycle.SampleFile') IS NULL
CREATE TABLE recycle.SampleFile
(
Col1 VARCHAR(MAX)
);
-- more info here
-- https://blogs.msdn.microsoft.com/sqlserverstorageengine/2017/02/23/loading-files-from-azure-blob-storage-into-azure-sql-database/
-- You can use this to conditionally create the master key
select * from sys.symmetric_keys where name like '%DatabaseMasterKey%'
-- Run once to create a database master key
-- Can't create credentials until a master key has been generated
-- Here, zzz is a password that you make up and store for later use
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'zzz';
-- Create a database credential object that can be reused for external access to Azure Blob
CREATE DATABASE SCOPED CREDENTIAL BlobTestAccount
WITH
-- Must be SHARED ACCESS SIGNATURE to access blob storage
IDENTITY= 'SHARED ACCESS SIGNATURE',
-- Generated from Shared Access Signature area in Storage account
-- Make sure the start date is at least a few days before
-- otherwise UTC can mess you up because it might not be valid yet
-- Don't include the ? or the endpoint. It starts with 'sv=', NOT '?' or 'https'
SECRET = 'sv=2016-05-31&zzzzzzzzzzzz'
-- Create the external data source
-- Note location starts with https. I've seen examples without this but that doesn't work
CREATE EXTERNAL DATA SOURCE BlobTest
WITH (
TYPE = BLOB_STORAGE,
LOCATION = 'https://yourstorageaccount.blob.core.windows.net',
CREDENTIAL= BlobTestAccount);
BULK INSERT recycle.SampleFile
FROM 'container/folder/file'
WITH ( DATA_SOURCE = 'BlobTest');
-- If you're fancy you can use these to work out if your things exist first
select * from sys.database_scoped_credentials
select * from sys.external_data_sources
DROP EXTERNAL DATA SOURCE BlobTest;
DROP DATABASE SCOPED CREDENTIAL BlobTestAccount;
One thing that this won't do that ADF does is pick up files based on a wildcard.
That is, if I have a file called ABC_20170501_003.TXT, I need to explicitly list it in the bulk insert load script, whereas in ADF I can just specify ABC_20170501 and it automatically wildcards the rest.
Unfortunately there is no (easy) way to enumerate files in blob storage from SQL Server. I eventually got around this by using Azure Automation to run a PowerShell script that enumerates the files and registers them into a table that SQL Server can see. This seems complicated, but Azure Automation is a very useful tool to learn and use, and it works very reliably.
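The same workaround can also be scripted outside Azure Automation. The sketch below uses C# with the Azure.Storage.Blobs client instead of the PowerShell the answer describes, and every name in it (the container, the dbo.BlobFileQueue table, both connection strings) is a hypothetical placeholder:

using Azure.Storage.Blobs;
using Microsoft.Data.SqlClient;

static void RegisterBlobNames(string storageConnString, string sqlConnString)
{
    // Hypothetical container name; substitute your own.
    var container = new BlobContainerClient(storageConnString, "mycontainer");

    using var conn = new SqlConnection(sqlConnString);
    conn.Open();

    // List every blob and record its name so a SQL Server job can BULK INSERT it later.
    foreach (var blob in container.GetBlobs())
    {
        using var cmd = new SqlCommand(
            "INSERT INTO dbo.BlobFileQueue (FileName) VALUES (@name)", conn);
        cmd.Parameters.AddWithValue("@name", blob.Name);
        cmd.ExecuteNonQuery();
    }
}

A scheduled job (Azure Automation, a Function, or a small console app) can run this periodically, and SQL Server then reads the registered file names from the table.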
More opinions on ADF:
I couldn't find a way to pass the filename that I loaded (or other info) into the database.
Do not use ADF if you need data to be loaded in the order it appears in the file (i.e. as captured by an identity field). ADF will try to do things in parallel. In fact, my ADF did insert things in order for about a week (i.e. as recorded by the identity), then one day it just started inserting stuff out of order.
The timeslice concept is useful in limited circumstances (when you have cleanly delineated data in cleanly delineated files that you want to drop neatly into a table). In any other circumstances it is complicated, unwieldy and difficult to understand and use. In my experience real world data needs more complicated rules to work out and apply the correct merge keys.
I don't know the cost difference between importing files via ADF versus BULK INSERT, but ADF is slow. I don't have the patience to hack through Azure blades to find metrics right now, but you're talking 5 minutes in ADF vs 5 seconds with BULK INSERT.
UPDATE:
Try Azure Data Factory V2. It is vastly improved, and you are no longer bound to timeslices.

SQL Server Back-End / Access Front-End ODBC Error

I can read/write/update the table fine in SSMS, and I can open/read/write the table fine if I open the table itself in Access 2013, but if I try to query the table, I get the generic Access error message of
ODBC -- call failed
This table has 558,672 rows in it. I have tried using a DSN-less connection with VBA as well as manually linking the table through the toolbar in Access. What is causing Access to throw this error?
EDIT
I have also tried to compact and repair the database to no avail.
EDIT #2
It seems that only one element (a subform) is throwing the ODBC error. The peculiar thing is that the main form is based on the same data source as the subform, yet only the subform throws the error.
I had this problem before; here are the things I had to change so the table could be accessed and edited from MS Access:
1. Your tables should have a primary key. In the column properties, set Identity Specification to Yes, with an identity increment of 1. I would prefer to add a completely new column with an int data type for this.
2. No null values in Boolean (bit) fields; everything should be 1 or 0, and set a default constraint of 0.
