SQL Server 2016 Always Encrypted columns and SQL temporary tables (#temp) - sql-server

We are looking for a way to implement "Always Encrypted" columns in a database that at the same time makes heavy use of SQL temporary tables (#tmp).
We explored the alternative of dropping #temp tables altogether, but that would have a high impact on our app in terms of time/cost.
Did anyone find a way to write queries like "insert into #tmp select * from my_table", where my_table contains Always Encrypted (AE) columns?
I tried applying the same CMK and CEK to the tempdb database, so that I could give the #tmp table the same structure as my_table.
This doesn't solve the problem though - having the tables in 2 different databases seems to prevent the data transfer.
I'm looking for a SQL-only solution, not one that involves a client app (C#, VB, etc.) with access to all the encryption keys.
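For illustration, a rough sketch of what such a #tmp definition would look like (the column and key names here are made-up placeholders, not our real schema):

-- assumes a column encryption key named MyCEK exists in tempdb
create table #tmp (
    ssn varchar(11) collate Latin1_General_BIN2
        encrypted with (column_encryption_key = MyCEK,
                        encryption_type = DETERMINISTIC,
                        algorithm = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);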

Insert operations in the manner you are describing are not supported for encrypted columns.
"insert into #tmp select from my_table"
You will have to write a client app to achieve a similar result. If you want to explore that path, please leave a comment and I can guide you.
You should be able to achieve something similar in C# as follows.
Do select * from encryptedTable to load the data into a SqlDataReader, then use SqlBulkCopy to load it into the temp table via the SqlBulkCopy.WriteToServer(IDataReader) method.
If you have the encrypted table and the plaintext table on the same SQL Server instance, be aware that you might be leaking information to the SQL Server admin, because they can examine the plaintext data and the corresponding ciphertext.

Related

Import CSV into SQL Server database, keeping ID column values

I am working to migrate a SQLite database to SQL Server and I need to use IntelliJ IDEA to import all the data from the SQLite tables into the MSSQL database.
I have exported the data to CSV format, but when I import into SQL Server, I need to maintain the existing ID columns (as foreign keys refer to it).
Normally, I can do this by executing SET IDENTITY_INSERT xxx ON; prior to my INSERT statements.
However, I do not know how to do this when importing CSV using IntelliJ.
The only other option I see is to export the data as a series of SQL INSERT statements, but that is very time consuming as the schemas between the two databases are slightly different (not to mention the SQL syntax).
Is there another way to import this data?
I don't know how to perform an Identity Insert ON in an IntelliJ query, but I do know how to work around this problem. Import your data into a temporary staging table, then execute a query within SQL Server that:
Sets Identity Insert ON
Inserts the data from the temporary table into the final destination
Sets Identity Insert OFF
What this really does is prevent you from having to spend (potentially) hours finding out how to implement an Identity Insert ON in IntelliJ when you may never need to do this again. It is straightforward and simple to code as well.
However, if you want to find out whether there is a way to do this directly in IntelliJ, go for it. That would be the cleaner approach.
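A minimal T-SQL sketch of that workaround (table and column names are made up):

-- run in SQL Server after importing the CSV into dbo.ImportStaging
SET IDENTITY_INSERT dbo.TargetTable ON;

INSERT INTO dbo.TargetTable (Id, Name, OtherColumn)
SELECT Id, Name, OtherColumn
FROM dbo.ImportStaging;

SET IDENTITY_INSERT dbo.TargetTable OFF;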

preserve the data while dropping a hive internal table

I have loaded a huge table from SQL Server into Hive. The mistake I made is that I created the table as an internal table in Hive. Can anyone suggest any hack so that I can alter the table structure without dropping the data?
The data is huge and I can't afford to export it out of the source again.
The problem right now is that since the column order doesn't match the SQL Server table, a lot of columns display NULL.
Any help will be highly appreciated.
I do not see any problem with using ALTER TABLE on an internal table (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/Partition/Column).
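For example, a column can be renamed, retyped, or moved with a metadata-only change like the one below (table and column names are made up; the underlying data files are not rewritten):

ALTER TABLE my_table CHANGE COLUMN col_b col_b STRING AFTER col_a;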
Another, but not recommended, option would be to open your Hive metastore (HCatalog) and apply the changes there. Hive reads the schema information from a relational database (configured during the Hadoop setup; the default is MySQL). In that database you can try to change the definitions directly. However, this is not recommended, as a single mistake can break your whole Hive database.
The safest way is to create a new table and use the existing one as the source:
create table new_table
as
select
[...]
from existing_table

How to Copy/Consolidate data from different tables hosted on different MS SQL Servers and save them into one Table on another MS SQL Server

I am a newbie in SQL so please bear with me. I am hoping you can help/guide me. I have a table on 5 MS SQL Servers, with identical columns on each, and I want to consolidate the data into a separate table on a separate MS SQL Server.
The challenge is that I only have read-only permission on the source tables (5 MS SQL Servers), but I have permission to create a table on the destination MS SQL Server DB.
Another challenge is that I want to truncate or extract parts of the text in one column of the source table and save them into different columns in the destination table.
The next challenge is for the destination table to query the source tables once a day for any updates.
Appreciate it very much if you can help/guide me. Many thanks in advance.
You'll need to set up a linked server and use either an SSIS package to pull the data into the form you need, or OPENROWSET/OPENQUERY queries with an INSERT on the server where you do have write privileges.
Either pre-create a table to put the new data in, or, if a permanent table is not needed, build up a temporary table or insert the data into a table variable.
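A rough sketch of the OPENQUERY variant, assuming a linked server named SourceServer1 and made-up table and column names:

-- run on the destination server, where you have write permission
INSERT INTO dbo.ConsolidatedTable (SourceServer, Col1, Col2)
SELECT 'SourceServer1', Col1, Col2
FROM OPENQUERY([SourceServer1],
     'SELECT Col1, Col2 FROM SourceDb.dbo.SourceTable');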
To concatenate fields into a new field, use something like the examples below:
SELECT (field1 + field2) AS NewField
or
SELECT (SUBSTRING(field1, 2, 2) + SUBSTRING(field2, 3, 1)) AS NewField
Finally, you should set all of this up as a SQL Server Agent job scheduled to your needs.
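A rough sketch of creating such a daily job in T-SQL (job, schedule, and procedure names are made up; this assumes the insert logic is wrapped in a stored procedure):

EXEC msdb.dbo.sp_add_job         @job_name = N'Consolidate source tables';
EXEC msdb.dbo.sp_add_jobstep     @job_name = N'Consolidate source tables',
                                 @step_name = N'Pull data from linked servers',
                                 @subsystem = N'TSQL',
                                 @database_name = N'DestinationDb',
                                 @command = N'EXEC dbo.usp_ConsolidateSources;';
EXEC msdb.dbo.sp_add_schedule    @schedule_name = N'Daily at 2am',
                                 @freq_type = 4,               -- daily
                                 @freq_interval = 1,
                                 @active_start_time = 20000;   -- 02:00:00 (HHMMSS)
EXEC msdb.dbo.sp_attach_schedule @job_name = N'Consolidate source tables',
                                 @schedule_name = N'Daily at 2am';
EXEC msdb.dbo.sp_add_jobserver   @job_name = N'Consolidate source tables';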
Apologies if this is not as detailed as you like, but it seems there are many questions to be answered and not enough detail to help further.
Alternatively you could also do a lookup upon lookup (USING SSIS):
data flow task > download first table completely to destination server
JOIN TO
dataflow task > reading from the destination server, do a lookup against the 2nd origin server (if there is a match you might update; if not, insert)
repeat until all 5 of them are done.
This is NOT the most elegant or efficient solution, but it will definitely get the work done.

Access Upsize to Sqlserver not transfer data

I have an Access database with 11,000,000 records. I want to transfer these records to the same table in SQL Server 2008 using the Upsizing Wizard. The tool creates the database and tables correctly, but the table in SQL Server is empty and the data is not transferred.
Since you didn't mention receiving an error message, check the field types in the new SQL Server table to confirm they are compatible with their Access counterparts.
If it looks OK, start Access and create an ODBC link to the SQL Server table. Then create an Access "append query" to add data from the Access table to the SQL Server table.
INSERT INTO remote_table (field1, field2, field3)
SELECT field1, field2, field3
FROM local_table
WHERE date_field >= #2012-01-01# AND date_field < #2012-02-01#;
Note I imagined a WHERE clause which limits the number of rows to a reasonably small subset of the 11 million rows. Adjust as needed for your situation.
If that INSERT succeeds, repeat it with different WHERE conditions to append chunks of the data to SQL Server until you get it all transferred.
And if it fails, hopefully you will get an error message which explains why.
As noted here, in most cases it is a bad date, or simply a date outside the range SQL Server supports, that causes the failure. I would suggest you use the Access migration tool as opposed to the built-in tool. It does a MUCH better job.
You can find this utility here:
http://www.microsoft.com/en-us/download/details.aspx?id=28763
The above tends to deal with the date and other issues that prevent data uploads far better than the built-in upsize tool.

Use SSIS to migrate and normalize database

We have an MS Access database that we want to migrate to a SQL Server Database with a new DB design. A part of the application that uses the SQL Server DB is already written.
I looked around to find out how to do the migration step most easily and started with Microsofts SQL Server Integration Services (SSIS). Now I have gotten to the point that I want to split a table vertically for normalization reasons.
A made-up example looks like this:
MS Access table person (ID, Name, Street)
SQL Server table person (id, name)
SQL Server table address (id, person_id, street)
How can I complete this task best with SSIS? The id columns are identity (autoincrement) columns, so I cannot insert the old ID. How can I put the correct person_id foreign key in the address table?
There might even be a table which has to be broken up into three tables, where a row in table2 belongs to a row in table1 and a row in table3 belongs to a row in table2.
Is SSIS the appropriate means for this?
EDIT
Although this is a one-time migration, we need to have an automated and repeatable process, because the production database is under heavy usage and we are working on the migration in our development environment with recent, but not up-to-date data. We plan for one test run of the migration and have the customer review the behaviour. If everything is fine, we will go for the real migration.
Most of the given solutions include lots of manual steps and are thus not appropriate.
Use the Execute SQL Task and write the statements yourself.
For the parent table, do an INSERT INTO ... SELECT FROM the source table, then do the same for the rest as you progress. Make sure you set IDENTITY_INSERT to ON for the parent table and reuse your old IDs. That will help you keep your data integrity.
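A rough sketch of what such a statement could look like, using the person example from the question (the staging table name is made up):

SET IDENTITY_INSERT dbo.person ON;

INSERT INTO dbo.person (id, name)
SELECT ID, Name
FROM dbo.person_access;   -- staging copy of the Access table

SET IDENTITY_INSERT dbo.person OFF;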
For migrating your Access tables into SQL Server, use SSMA, not the Upsizing Wizard from Access.
You'll get a lot more tools at your disposal.
You can then break up your tables one by one from within SQL Server.
I'm not sure if there are any tools that can help you split your tables automatically (at least I couldn't find any), but it's not too difficult to do manually, although how much work is required depends on how you used the original tables in your VBA code and forms in the first place.
A side note
Regarding normalization, don't go overboard with it: I know your example was just that but normalizing customer addresses is not always (rarely?) needed.
How many addresses can a person have?
If you count a home address, business address, delivery address, billing address, that's probably the most you'll ever need.
In that case, it's better to just keep them in the same table. Normalizing that data will just require more work to recombine and offers no benefit.
Of course, there are cases where it would make sense to normalise, but I've seen people go overboard with the notion (I've been guilty of it as well) and then find themselves struggling to build ever more complex queries to join all that split data, making development and maintenance harder and often suffering a performance penalty in the process.
Access is so user-friendly, why not normalize your tables in Access, and then upsize the finished structure from there?
I found a different solution which was not mentioned yet and allows us to use all the comfort and options of the dataflow task:
If the destination database is on a local SQL Server, you can use a dataflow task with SQL Server destination instead of an OLE DB destination.
For a SQL Server destination you can mark the "keep identities" option. (I do not know if the English names are correct, because we have a German version.) With this you can write into identity columns.
We found that we cannot use the old primary keys everywhere, because we have some tables that take a union of records from multiple tables.
We start the process by building a temporary mapping table with columns
new_id (identity)
old_id (int)
old_tablename (string)
We first fill in all the old_id values for every table that is referenced by a foreign key in the new schema. The new_id values are generated automatically by SQL Server.
So we can use a join to translate from old_id to new_id where needed: the new_id values fill the identity (primary key) columns in the new tables via the "keep identities" option, and for the foreign keys we simply look up the new values in our mapping table with a join.
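A minimal T-SQL sketch of that mapping-table idea, using the person/address example from the question (all object names are made up):

CREATE TABLE dbo.id_map (
    new_id        int IDENTITY(1,1) PRIMARY KEY,
    old_id        int          NOT NULL,
    old_tablename varchar(128) NOT NULL
);

-- register every old person ID; SQL Server generates the new_id values
INSERT INTO dbo.id_map (old_id, old_tablename)
SELECT ID, 'person'
FROM dbo.person_access;   -- staging copy of the old table

-- the new person rows are written with new_id as their key ("keep identities"),
-- so foreign keys can be translated with a simple join on the mapping table
INSERT INTO dbo.address (person_id, street)
SELECT m.new_id, a.Street
FROM dbo.person_access AS a
JOIN dbo.id_map AS m
  ON m.old_id = a.ID
 AND m.old_tablename = 'person';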
You might also look at Jamie Thomson's SSIS Normalizer component. I just found out about it today (haven't actually tried it yet). The example he posts looks a lot like the one in your question.
