I have an odd question that needs a creative answer.
I have coded a program that writes sensitive data to a SQL Server table with 3 columns.
Every time this program starts, I need to check this data.
The problem is that I need a way to verify that the data in these 3 fields was written by my process and not manipulated or copied externally or by another process.
So if the data in any of the 3 fields was modified externally, my code should not recognize the data. Also, if the data was copied from another server, it should not be recognized either.
What I have in mind:
1) Create a secret private key from data unique to the SQL instance.
2) Create a binary field on the table.
3) When the data is written, fill the binary field using the PwdEncrypt function with the private key as input.
4) When the data is read, use PwdCompare to check whether the data in the binary field matches the private key.
Now, how can I ensure that other fields are not modified?
I need this to work on several servers running anything from SQL Server Express 2008 R2 to SQL Server Standard 2016.
Thanks!
Your approach is pretty much correct, but you don't need anything asymmetric here; a simple HMAC will do.
When modifying any row of a table that requires data authenticity, concatenate the binary values of all fields that you want authenticated and run the final binary string through an HMAC with a secret key stored only with your process.
Do the same again when checking to see if the row is valid and compare the two resulting hashes using a time-safe check. If they don't match, something has been tampered with.
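For concreteness, here is a minimal T-SQL sketch of that construction. The table and column names are assumed, not from your question, and SHA2_256 requires SQL Server 2012+, so a 2008 R2 instance would have to fall back to SHA1. In the real design the MAC should be computed inside your process, with a key that never reaches SQL Server; the server-side version below only illustrates the concatenate-then-MAC shape. Also note that a prefix-keyed hash is not a true HMAC, so prefer a real HMAC implementation in application code:

-- Assumed schema: SensitiveData(Id int, FieldA nvarchar(100), FieldB nvarchar(100),
-- FieldC int, Mac varbinary(32)). Assumes non-NULL fields; wrap with isnull() otherwise.
declare @secret varbinary(64) = 0x0102030405; -- placeholder; the real key lives only in your process

update SensitiveData
set Mac = HASHBYTES('SHA2_256',
          @secret
        + cast(FieldA as varbinary(200))
        + cast(FieldB as varbinary(200))
        + cast(FieldC as varbinary(4)))
where Id = 1;

-- On startup, recompute and compare (do the real comparison time-safely in the client):
select Id,
       case when Mac = HASHBYTES('SHA2_256',
                 @secret
               + cast(FieldA as varbinary(200))
               + cast(FieldB as varbinary(200))
               + cast(FieldC as varbinary(4)))
            then 'ok' else 'tampered' end as Status
from SensitiveData;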
My project currently has a database which contains several tables, the most important of which has one binary column with very large entries (representing serialized C# objects). There are a large number of entries in the production database, and when debugging, it is often necessary to pull these entries down into the local development database (as remote debugging does not seem to work, which is a separate issue).
If I attempt to compare the local and production databases on this table with all columns, the comparison can take up to an hour, or eventually time out, but this has worked in the past and allowed me to download the entries and debug them successfully. If I compare on all table columns but the binary data column, the comparison is almost instantaneous, but that column is not then transferred to the production database.
My question is: is there any way to run a data comparison between two tables, excluding a particular column for the comparison itself (other fields give enough information to differentiate without it) but including it when updating the target database?
You could use a hash function on your large varbinary fields and compare those. HASHBYTES with MD5 is a good method for comparing, as it's astronomically unlikely to generate the same hash value for two different inputs. The problem is, HASHBYTES only works on fields up to 8000 bytes. There are some workarounds, though, built around creating a function. A few are posted here:
SQL Server 2008 and HashBytes
You would have the option of storing the hash values in your table at insert or update time by using a persisted computed column. Or you could just generate the hash values while doing your comparison query.
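For example, something along these lines (a sketch with assumed names; it presumes both databases are reachable from one connection, e.g. via a linked server, and that the entries fit HASHBYTES' 8000-byte input cap on pre-2016 versions, otherwise substitute one of the chunking functions from the linked post):

-- Find rows whose binary payload differs; DATALENGTH is a cheap first-pass filter.
SELECT l.Id
FROM LocalDb.dbo.BigTable AS l
JOIN ProdDb.dbo.BigTable AS p ON p.Id = l.Id
WHERE DATALENGTH(l.Payload) <> DATALENGTH(p.Payload)
   OR HASHBYTES('MD5', l.Payload) <> HASHBYTES('MD5', p.Payload);

Only the rows this returns would then need their binary column pulled across.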
In SQL Server 2008, I have a strongly typed data set with a table:
TABLE
ID (Guid)
Value (varchar(50))
In this table, Value actually represents an encrypted value in the database, which is decrypted after being read from this table on my server.
In Visual Studio, I have a DataSet with my table, which looks like:
TABLE
ID (Guid)
Value (float)
I want to know if there is a way, in a DataSet, to call my decryption methods on Value when I am calling my Fill Query on the TableAdapter for this Table.
Is there any way to extend the DataSet XSD to support this sort of data massaging when reading data?
In addition to this, is there a way when inserting/updating records in this table to write strings to encrypted values?
NOTE:
All encryption/decryption code runs on the client talking to the database, not on the database itself.
The Fill() method is going to execute whatever SQL is in the SelectCommand property of the DataAdapter. It's certainly possible to customize the SQL to "massage" data as it comes in.
Your issue is made more complex by the need to execute some .NET decryption. If you really want to do this and it is of high value to you, you could install a .NET assembly in the SQL Server database. Once this was done, you should be able to specify a custom SelectCommand that calls the code in your .NET assembly to decrypt the data at select-time.
But that seems like an awful lot of work for very little reward. It's probably easier and more efficient to simply post-process the dataset and decrypt there. :)
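For what it's worth, the SQLCLR route would look roughly like this (every name here is assumed, and depending on which crypto APIs your assembly uses you may need PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE rather than SAFE):

-- Register the compiled decryption assembly (assumed path and names):
CREATE ASSEMBLY MyCrypto
FROM 'C:\deploy\MyCrypto.dll'
WITH PERMISSION_SET = SAFE;
GO
-- Expose a static method, e.g. MyCrypto.Decryptor.Decrypt(SqlString) -> SqlDouble:
CREATE FUNCTION dbo.DecryptValue (@v varchar(50))
RETURNS float
AS EXTERNAL NAME MyCrypto.[MyCrypto.Decryptor].Decrypt;
GO
-- The TableAdapter's SelectCommand can then decrypt at select-time:
SELECT ID, dbo.DecryptValue(Value) AS Value FROM dbo.[TABLE];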
We have a program in which each user is given their own Access database. We'd like to merge these all together into a single SQL Server database.
The problem is that, using the SQL Server import/export wizard, the primary/foreign keys do not get updated. So for instance if one user has this table:
1 Apple
2 Banana
and another user has this:
1 Coconut
2 Cheeseburger
the resulting table looks like this:
1 Apple
2 Banana
1 Coconut
2 Cheeseburger
Similarly, anything that referenced Banana by its primary key (2) is now referencing both Banana and Cheeseburger, which will not make the vegans very happy.
Is there any way to automatically update the primary/foreign key references when importing, other than writing an extremely long and complex import-script?
If you need to keep them fully compartmentalized, you have to assign some kind of partitioning column to each table. Is there a reason you need your SQL Server to have the same referential integrity as Access? Are you just importing to SQL Server for read-only reporting? In that case, I would not bother with RI. The queries will all require a partitionid/siteid/customerid. You could enforce that for single-entity access by wrapping tables with a table-valued UDF which requires the partitionid. For cross-site access that doesn't work.
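The wrapper idea looks something like this (a rough sketch; the names are made up):

-- An inline table-valued function that forces callers to supply the partition:
CREATE FUNCTION dbo.FruitForSite (@SiteId int)
RETURNS TABLE
AS RETURN (
    SELECT Id, Name
    FROM dbo.Fruit
    WHERE SiteId = @SiteId
);
GO
-- Usage:
SELECT * FROM dbo.FruitForSite(42);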
If you are just loading to SQL Server for reporting, I would also consider altering the data model to support reporting (i.e. a dimensional model is sometimes better than a normalized model) instead of worrying about transaction processing.
I think we need to know more about the underlying goals.
Need more information on the requirements.
My basic question is 'Do you need to preserve the original record key?' e.g. 1:apple in table T of user-database A; 1:coconut in table T of user-database B. Table T is assumed to have the same structure in all database instances. Reasons I can suppose you may want to preserve the original data: (a) you may have a requirement to reference the original data (maybe a visual for previous reporting), and/or (b) there may be a data dependency in the application itself.
If the answer is 'no,' then you are probably interested only in preserving all of the distinct data values. Allow the SQL table to build using a new key and constrain the SQL table field such that it contains unique data. This approach seems to preserve the original table structure (but not the original key value or its 'location') and may suffice to meet your requirement.
If the answer is 'yes,' I do not see a way around creating an index that preserves a pointer to the original database and the key that was created in its table T. This approach would seem to require an application modification.
The best approach in this case is probably to split the incoming data into two tables: one to identify the database and original key, another to identify the distinct data values. For example: (database) table D has records such as 'A:1:a,' 'A:2:b,' 'B:1:c,' 'B:2:d,' 'B:15:a,' 'C:8:a'; (data) table T1 has records such as 'a:apple,' 'b:banana,' 'c:coconut,' 'd:cheeseburger' where 'A' describes the original database 'location,' 1 is the original key value in location 'A,' and 'a' is a value that equates records in table D and table T1. (Otherwise you have a lot of redundant data in the one table; e.g. A:1:apple, B:15:apple, C:8:apple.) Also, T1 has a structure similar to the original T and seems to be more directly useful in the application.
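In T-SQL the split might look like this (a sketch; all names and types are assumed):

CREATE TABLE T1 (                         -- distinct data values
    DataKey char(1) PRIMARY KEY,          -- 'a', 'b', ...
    Value varchar(50) NOT NULL UNIQUE     -- 'apple', 'banana', ...
);
CREATE TABLE D (                          -- maps each original row to its value
    SourceDb char(1) NOT NULL,            -- 'A', 'B', ... (original database)
    OriginalKey int NOT NULL,             -- the key from that database's table T
    DataKey char(1) NOT NULL REFERENCES T1 (DataKey),
    PRIMARY KEY (SourceDb, OriginalKey)
);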
Ended up creating an SSIS project for this. SSIS (SQL Server Integration Services) is a visual programming tool made by Microsoft, shipped with SQL Server as part of the Business Intelligence Development Studio, and designed for solving exactly these sorts of problems.
Why not let Access use its replication manager to merge the databases? This will allow you to identify the conflicts and resolve them before importing to SQL Server. I'm fairly confident it will retain the foreign key relationships. If I understand your situation correctly, and the databases are the same structure with different data, you could load the combined database to the application and verify the data before moving to SQL Server.
What version of Access are you using? Here's a link for Access 2000. Use the language to adjust search parameters to fit your version.
http://technet.microsoft.com/en-us/library/cc751054.aspx
Background: I am a software tester working with a test case management database that stores data using the deprecated image data type. I am relatively inexperienced with SQL Server.
The problem: Character data with rich text formatting is stored as an image data type. Currently the only way to see this data in a human readable format is through the test case management tool itself which I am in the process of replacing. I know that there is no direct way to convert an image data type to character, but clearly there is some way this can be accomplished, given that the test case management software is performing that task. I have searched this site and have not found any hits. I have also not yet found any solutions by searching the net.
Objective: My goal is to export the data out of the SQL Server database into an Access database. There are fewer than 10,000 rows in the database. At a later stage in the project, the Access database will be upsized to SQL Server.
Request: Can someone please give me a method for converting the image data type to a character format?
You presumably want to convert to byte data rather than character. This post at my blog, "Save and Restore Files/Images to SQL Server Database", might be useful. It contains code for exporting to a byte array and to a file. The entire C# project is downloadable as a zip file.
One solution (for human readability) is to pull it out in chunks that you convert from binary to character data. If every byte is valid ASCII, there shouldn't be a problem (although legacy data is often not what you expect).
First, create a table like this:
create table Nums(
n int primary key
);
and insert the integers from 0 up to at least (maximum image column length in bytes)/8000; a quick way to populate it is sketched after the query below. Then the following query (untested, so think it through) should get your data out in a relatively useful form. Be sure whatever client you're pulling it into won't truncate strings at smaller than 8000 bytes. (You can do smaller chunks if you want to open the result in Notepad or something.)
SELECT
    yourTable.keycolumn,
    Nums.n AS chunkPosition,
    CAST(SUBSTRING(yourTable.imageCol, Nums.n * 8000 + 1, 8000) AS VARCHAR(8000)) AS chunk
FROM yourTable
JOIN Nums
    ON Nums.n <= (DATALENGTH(yourTable.imageCol) - 1) / 8000
ORDER BY yourTable.keycolumn, Nums.n
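And here is the promised quick way to populate Nums (a sketch; 4096 rows covers image values up to about 32 MB, so size it to your data):

-- Cross-join a 16-value table against itself to generate 0..4095.
WITH Digits AS (
    SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9),
                          (10),(11),(12),(13),(14),(15)) AS v(n)
)
INSERT INTO Nums (n)
SELECT d1.n + 16 * d2.n + 256 * d3.n
FROM Digits AS d1 CROSS JOIN Digits AS d2 CROSS JOIN Digits AS d3;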
I have a database that will be hosted by a third party. I need to encrypt strings in certain columns, but I do not want to lose the ability to query over the encrypted columns.
I have limited control over the SQL instance (I have control over the database I own, but not over any administrative functions).
I realize that I can use a .NET encryption library to encrypt the data before it is inserted into the table, but I would then lose the ability to query the data with SQL.
I like using SQL Server's key management: http://technet.microsoft.com/en-us/library/bb895340.aspx . After you have a key set up, it's really easy to use:
To insert records you do this (note that the key must first be opened in the session, or encryptByKey quietly returns NULL):

open symmetric key secret decryption by password = '...' -- placeholder; use whatever actually protects your key
insert into PatientTable values ('Pamela','Doc1',
encryptByKey(Key_GUID('secret'),'111-11-1111'),
encryptByKey(Key_GUID('secret'),'Migraine'))
close symmetric key secret
To read the records back out, open the key again and decrypt:

open symmetric key secret decryption by password = '...'
select Id, name, Docname
from PatientTable where convert(varchar(11), decryptByKey(SSN)) = '111-11-1111'
close symmetric key secret

One caveat: encryptByKey is salted, so encrypting the same plaintext twice produces different ciphertext, and you cannot find rows by re-encrypting the search value and comparing it to the stored column. If you need fast equality lookups, store a deterministic fingerprint of the plaintext (for example HASHBYTES over the value plus a secret) in a separate indexed column and match on that instead of decrypting every row.
If you use the same encryption key and a deterministic scheme (one where the same plaintext always produces the same ciphertext), you could encrypt your search string and match against that. Say my password is runrun and I encrypt it to ZAXCXCATXCATXCA; when I want to search for a user with password runrun, I encrypt it first and it will match the table entry.
AFAIK, most RDBMSs do not support this; what I usually see is either:
A) The DB query API encrypts the data with a key that only the local server knows before it is sent to the remote db and decrypts when it's received.
or
B) The remote database stores everything encrypted with a key that it knows (probably at run time, given physically by an admin, or it's given the key with the query).
A will let you use the database without letting the owners know what's being stored, but you won't be able to run queries on the actual encrypted data other than maybe equality checks. B only protects against physical server theft (the server has to be off, though, or they can get the key from memory...).
What I assume you want is called Private Information Retrieval. It's a fairly young field; I don't think you're going to find a decent implementation at the moment.
You could generate a hash (such as MD5) and store the hash value in the DB. When you query, you can select * from [my table] where value = {md5 hash}.
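A minimal sketch of that (names assumed; the value column is presumed to hold the MD5 hash, computed the same way at insert time as at query time):

-- At insert time, store HASHBYTES('MD5', @plaintext) in the value column.
-- At query time, hash the search string the same way and match on it:
DECLARE @search varchar(100) = 'runrun'; -- assumed search value
SELECT *
FROM [my table]
WHERE value = HASHBYTES('MD5', @search);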