Restore deleted information from active SQL Server table

I have a database that was storing customer information (dbo.custinfo). The telephone numbers in the table were removed in the past for security purposes by replacing them with NULL values. Now it has been decided that the phone numbers need to be put back in.
Other customers have been added to this database since then. What is the best way to restore this data while making sure the correct numbers correspond to the correct customers?
I have the backup of the db that was done before removing the phone numbers.
Thank you in advance

As Larnu's comment notes, a side-by-side restore of the database is required; that part can be accomplished through SSMS and its GUI.
However, there is no wizard for building an UPDATE statement that uses joins, so this may be a somewhat advanced case for someone who is not specialized in SQL.
So, a generic snippet to get values back:
-- Take another backup first, so this change itself can be rolled back
BACKUP DATABASE yourDB TO DISK = 'C:\Somewhere\YourDB.bak' WITH COPY_ONLY, COMPRESSION, STATS = 1;
-- UPDATE + JOIN: pull the phone numbers across from the restored copy,
-- touching only rows that are still NULL so newer data is not overwritten
UPDATE cur_ci
SET cur_ci.phone = res_ci.phone
FROM [CURRENTDB].dbo.custinfo cur_ci
JOIN [RESTOREDB].dbo.custinfo res_ci ON cur_ci.customerID = res_ci.customerID
WHERE cur_ci.phone IS NULL;
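If you prefer T-SQL to the SSMS restore wizard for the side-by-side restore itself, a sketch like the following should work; the backup file name, logical file names, and target paths here are assumptions, so check them with RESTORE FILELISTONLY first:
-- Inspect the logical file names stored in the backup
RESTORE FILELISTONLY FROM DISK = 'C:\Somewhere\PreRemoval.bak';
-- Restore the old backup under a new name, alongside the live database
RESTORE DATABASE RESTOREDB
FROM DISK = 'C:\Somewhere\PreRemoval.bak'
WITH MOVE 'yourDB_Data' TO 'C:\Data\RESTOREDB.mdf',
     MOVE 'yourDB_Log' TO 'C:\Data\RESTOREDB_log.ldf';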

Related

SQL Server trigger in database A should insert a row in database B as a different user

We have a system that stores tens of thousands of databases, each holding the data of a customer subscribing to our services. Each of those databases has its own user that is known and used by the customer through the distributed, thick application front-ends we provide.
Now I would like to add a trigger to one of the tables in each one of those dbs, that should update one common db with some of the data from the inserted rows. Sort of a many-to-one db scenario if you will...
This new common db can be set up pretty much however we like as far as users/permissions and so on, and the trigger we insert into the old dbs can be written pretty freely. But we cannot change those customer dbs as far as users/permissions go.
Currently I'm experimenting with the guest user, which has been given write access on the common db (named "foo" in the example below), but it is not working. I think this is because the guest user of the common db is not allowed to access the customer db table (named Bf in the example below) that fires the trigger. It may also be that I'm using EXECUTE AS USER where I should be using EXECUTE AS LOGIN? I'm having a hard time finding a comprehensible place that describes the difference.
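For context, the difference as far as I can tell: EXECUTE AS USER impersonates a database principal, so the identity is only trusted inside that one database, while EXECUTE AS LOGIN impersonates a server principal, whose identity carries across databases on the instance. A quick probe that shows the difference ('some_login' is just a placeholder):
-- Database-scoped impersonation: identity is not trusted outside this db
EXECUTE AS USER = 'guest';
SELECT SUSER_SNAME() AS login_name, USER_NAME() AS db_user;
REVERT;
-- Server-scoped impersonation: identity is valid instance-wide
EXECUTE AS LOGIN = 'some_login'; -- placeholder login name
SELECT SUSER_SNAME() AS login_name, USER_NAME() AS db_user;
REVERT;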
This is the trigger we would like to get working, and inserted in every customer db:
ALTER TRIGGER [dbo].[trgAfterInsert] ON [dbo].[Bf]
WITH EXECUTE AS 'guest'
FOR INSERT
AS
BEGIN
    INSERT INTO [foo].[dbo].[baar] (publicKey, fileId)
    SELECT b.publicKey, a.autoId
    FROM client_db_1.[dbo].[Integrationer] AS b
         CROSS JOIN inserted AS a
    WHERE a.autoId IN (SELECT TOP 1 autoId FROM inserted ORDER BY autoId)
END
As you may guess, I'm not an experienced user of triggers or SQL permissions/access work. But the info we want to collect is harmless, it should take close to zero time to execute, and this is a secured, non-exposed environment, so I'm very willing to read/learn if anyone has advice?
/Henry

How can I refer to a database deployed from the same DACPAC file, but with a different db name?

Background
I have a multi-tenant scenario and a single SQL Server project that will be deployed to multiple database instances on the same server. There will be one db for each tenant, plus one "model" db.
The "model" database serves three purposes:
It forces some "system" data to always be present in each tenant database
It serves as an access point for users with a special permission to edit system data (which is then punctually synced to all tenants)
When creating a new tenant, the model database is copied and attached under a new name representing the tenant
There are triggers that check whether the modified/deleted data within a tenant db corresponds to "system" data inside the "model" db. If it does, an error is raised saying that system data cannot be altered.
Issue
So here's a part of the trigger that checks if deletion can be allowed:
IF DB_NAME() <> 'ModelTenant' AND EXISTS
(
    SELECT [deleted].*
    FROM [deleted]
    INNER JOIN [---MODEL DB NAME??? ---].[MySchema].[MyTable] [ModelTable]
        ON [deleted].[Guid] = [ModelTable].[Guid]
)
BEGIN
    ;THROW 50000, 'The DELETE operation on table MyTable cannot be performed. At least one targeted record is reserved by the system and cannot be removed.', 1;
END;
I can't seem to find what should take the place of --- MODEL DB NAME??? --- in the above code that would allow the project to compile properly. When referring to a completely different project I know what to do: use a reference to the project, represented with a SQLCMD variable. But in this scenario the reference is essentially the same project, only on a different database. I can't seem to be able to add a self-reference in this manner.
What can I do? Does SSDT offer some kind of support for such a scenario?
Have you tried setting up a Database Variable? You can read under "Reference aware statements" here. You could then say:
SELECT * FROM [$(MyModelDb)].[MySchema].[MyTable] [ModelTable]
If you don't have a specific project for $(MyModelDb) you can choose the option to "suppress errors by unresolved references...". It's been forever since I've used SSDT projects, but I think that should work.
TIP: If you need to reference one table 100 times, you may find it better to create a SYNONYM that uses the database variable, then point to the SYNONYM in your SPROCs/TRIGGERs. Why? Because that way you don't need to redeploy your SPROCs/TRIGGERs to get the variable replaced with the actual value, and that can make development smoother.
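A hedged sketch of that synonym approach (the object names here are made up for illustration):
-- Deploy time: the SQLCMD variable is resolved once, inside the synonym definition
CREATE SYNONYM [MySchema].[ModelMyTable]
    FOR [$(MyModelDb)].[MySchema].[MyTable];
-- Everywhere else: reference the synonym; no variable substitution needed
SELECT * FROM [MySchema].[ModelMyTable];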
I'm not quite sure if SSDT is particularly well-suited to projects of any decent amount of complexity. I can think of one or two ways to most likely accomplish this (especially depending on exactly how you do the publishing / deployment), but I think you would actually lose more than you gain. What I mean by that is: you could add steps to get this to work (i.e. win the battle), but you would be creating a more complex system in order to get SSDT to publish a system that is more complex (and slower) than it needs to be (i.e. lose the war).
Before worrying about SSDT, let's look at why you need/want SSDT to do this in the first place. You have system data intermixed with tenant data, and you need to validate UPDATE and DELETE operations to ensure that the system data does not get modified, and the only way to identify data that is "system" data is by matching it to a home-of-record -- ModelDB -- based on GUID PKs.
That theory on identifying which data belongs to the "system" and not to a tenant is your main problem, not SSDT. You are definitely on the right track for a multi-tenant system by having the "model" database, but using it for data validation is a poor design choice: on top of the performance degradation already incurred from using GUIDs as PKs, you are further slowing down all of these UPDATE and DELETE operations by funneling them through a single point of contention, since all client DBs need to check this common source.
You would be far better off to include a BIT field in each of these tables that mixes system and tenant data, denoting whether the row was "system" or not. Just look at the system catalog views within SQL Server:
sys.objects has an is_ms_shipped column
sys.assemblies went the other direction and has an is_user_defined column.
So, if you were to add an [IsSystemData] BIT NOT NULL column to these tables, your Trigger logic would become:
IF DB_NAME() <> N'ModelTenant' AND EXISTS
(
    SELECT del.*
    FROM [deleted] del
    WHERE del.[IsSystemData] = 1
)
BEGIN
    ;THROW 50000, 'The DELETE operation on table MyTable cannot be performed. At least one targeted record is reserved by the system and cannot be removed.', 1;
END;
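Adding the flag itself might look like this (a sketch; the constraint name is an assumption, and existing rows default to tenant data):
ALTER TABLE [MySchema].[MyTable]
    ADD [IsSystemData] BIT NOT NULL
        CONSTRAINT [DF_MyTable_IsSystemData] DEFAULT (0);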
Benefits:
No more SSDT issue (at least not from this part ;-)
Faster UPDATE and DELETE operations
Less contention on the shared resource (i.e. ModelDB)
Less code complexity
As an alternative to referencing another database project, you can produce a dacpac, then reference the dacpac as a database reference in "same server, different database" mode.

SQL 2008: All records in a column updated to NULL

About 5 times a year, one of our most critical tables has a specific column where all the values are replaced with NULL. We have run log explorers against this and we cannot see any login/hostname populated with the update; we can just see that the records were changed. We have searched all of our sprocs, functions, etc. for any update statement that touches this table, across all databases on our server. The table does have a foreign key constraint on this column. It is an integer value that is established during an update, but that update is identity-key specific. There is also an index on this field. Any suggestions on what could be causing this, outside of a T-SQL update statement?
I would start by denying any client-side dynamic SQL if at all possible. It is much easier to audit stored procedures to make sure they execute the correct SQL, including a proper WHERE clause. Unless your SQL Server is terribly broken, the only way data is updated is because of the SQL you are running against it.
All stored procs, scripts, etc. should be audited before being allowed to run.
If you don't have the mojo to enforce no dynamic client SQL, add application logging that captures each client SQL statement before it is executed. Personally, I would have the logging routine throw an exception (after logging it) when a WHERE clause is missing, but at a minimum, you should be able to figure out where data gets blown out next time by reviewing the log. Make sure your log captures enough information that you can trace it back to the exact source. Assign a unique "name" to each possible dynamic SQL statement executed, e.g., assign a 3-char code to each program and number each possible call 1..nn within the program, so you can tell which call blew up your data at "abc123", as well as the exact SQL that was defective.
ADDED COMMENT
Thought of this later. You might be able to add or modify the UPDATE trigger on the table to look at the number of rows updated, and prevent the update if the number of rows exceeds a threshold that makes sense for you. So, I did a little searching and found that someone has already written an article on this, as in this snippet:
CREATE TRIGGER [Purchasing].[uPreventWholeUpdate]
ON [Purchasing].[VendorContact]
FOR UPDATE AS
BEGIN
    DECLARE @Count int;
    SET @Count = @@ROWCOUNT;

    IF @Count >= (SELECT SUM(row_count)
                  FROM sys.dm_db_partition_stats
                  WHERE object_id = OBJECT_ID('Purchasing.VendorContact')
                    AND index_id = 1)
    BEGIN
        RAISERROR('Cannot update all rows', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END
END
Though this is not really the right fix, if you log this appropriately, I bet you can figure out what tried to screw up your data and fix it.
Best of luck
A transaction log explorer should be able to show you who executed the command, when, and what exactly the command looked like.
Which log explorer do you use? If you are using ApexSQL Log, you need to enable the connection monitor feature in order to capture additional login details.
This might be like using a sledgehammer to drive in a thumb tack, but have you considered using SQL Server Auditing (provided you are using SQL Server Enterprise 2008 or greater)?
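If you go that route, a minimal sketch of a server audit plus a database audit specification scoped to the affected table might look like this (the database, table, and file path names are assumptions; database audit specifications need Enterprise edition on SQL Server 2008):
USE master;
-- Define the audit target and switch it on
CREATE SERVER AUDIT [Audit_ColumnWipe]
    TO FILE (FILEPATH = 'C:\Audits\');
ALTER SERVER AUDIT [Audit_ColumnWipe] WITH (STATE = ON);

USE YourDb;
-- Record every UPDATE against the critical table, by anyone
CREATE DATABASE AUDIT SPECIFICATION [Audit_CriticalTableUpdates]
    FOR SERVER AUDIT [Audit_ColumnWipe]
    ADD (UPDATE ON dbo.YourCriticalTable BY public)
    WITH (STATE = ON);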

If my database user is read only, why do I need to worry about sql injection?

Can they (malicious users) describe tables and get vital information? What if I lock the user down to specific tables? I'm not saying I want SQL injection, but I wonder about old code we have that is susceptible even though the db user is locked down. Thank you.
EDIT: I understand what you are saying, but if I have no Response.Write for the other data, how can they see it? The bring-it-to-a-crawl and DoS scenarios make sense, as do the others, but how would they actually see the data?
Someone could inject SQL to cause an authorization check to return the equivalent of true instead of false to get access to things that should be off-limits.
Or they could inject a join of a catalog table to itself 20 or 30 times to bring database performance to a crawl.
Or they could call a stored procedure that runs as a different database user that does modify data.
'); SELECT * FROM Users
Yes, you should lock them down to only the data (tables/views) they should actually be able to see, especially if it's publicly facing.
Only if you don't mind arbitrary users reading the entire database. For example, here's a simple, injectable login sequence:
-- the application concatenates the textbox values straight into the query string:
select * from UserTable where userID = '<txtUserName.Text>' and password = '<txtPassword.Text>'
if (RowCount > 0) {
    // Logged in
}
I just have to log in with any username and the password ' or '1'='1 to log in as that user.
Be very careful. I am assuming that you have removed drop table, alter table, create table, and truncate table, right?
Basically, with good SQL Injection, you should be able to change anything that is dependent on the database. This could be authorization, permissions, access to external systems, ...
Do you ever write data to disk that was retrieved from the database? In that case, they could upload an executable like perl and a perl file and then execute them to gain better access to your box.
You can also determine what the data is by leveraging a situation where a specific return value is expected. I.e., if the SQL returns true, execution continues; if not, execution stops. Then you can use a binary search in your SQL: SELECT COUNT(*) FROM UserTable WHERE user_password > 'H'; if the count is > 0, it continues. Now you can find the exact plain-text password without requiring it ever to be printed on the screen.
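To make that concrete, here is a sketch of the probes an attacker might inject one at a time (the table and column names are assumptions):
-- Each probe answers one yes/no question; the attacker watches whether the
-- application continues (count > 0) or stops (count = 0) and narrows the range.
SELECT COUNT(*) FROM UserTable WHERE user_password > 'H';      -- first character after 'H'?
SELECT COUNT(*) FROM UserTable WHERE user_password > 'Hm';     -- second character after 'm'?
SELECT COUNT(*) FROM UserTable WHERE user_password LIKE 'Hm%'; -- confirm the prefix so far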
Also, if your application is not hardened against SQL errors, there might be a case where they can inject an error into the SQL, or into the SQL of the result, and have the result displayed on the screen during the error handler. The first SQL statement collects a nice list of usernames and passwords; the second statement tries to leverage them in a SQL condition for which they are not appropriate. If the SQL statement is displayed in this error condition, ...
Jacob
I read this question and answers because I was in the process of creating a SQL tutorial website with a readonly user that would allow end users to run any SQL.
Obviously this is risky and I made several mistakes. Here is what I learnt in the first 24 hours (yes most of this is covered by other answers but this information is more actionable).
Do not allow access to your user table or system tables:
Postgres:
REVOKE ALL ON SCHEMA PG_CATALOG, PUBLIC, INFORMATION_SCHEMA FROM PUBLIC;
Ensure your readonly user only has access to the tables you need in the schema you want:
Postgres:
GRANT USAGE ON SCHEMA X TO READ_ONLY_USER;
GRANT SELECT ON ALL TABLES IN SCHEMA X TO READ_ONLY_USER;
Configure your database to drop long running queries
Postgres:
Set statement_timeout in the PG config file
/etc/postgresql/(version)/main/postgresql.conf
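If you'd rather keep it in SQL than edit the config file, a per-role setting works too (the 5-second value is just an illustration):
-- abort any statement from this role that runs longer than 5 seconds
ALTER ROLE READ_ONLY_USER SET statement_timeout = '5s';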
Consider putting the sensitive information inside its own Schema
Postgres:
GRANT USAGE ON SCHEMA MY_SCHEMA TO READ_ONLY_USER;
GRANT SELECT ON ALL TABLES IN SCHEMA MY_SCHEMA TO READ_ONLY_USER;
ALTER USER READ_ONLY_USER SET SEARCH_PATH TO MY_SCHEMA;
Take care to lock down any stored procedures and ensure they can not be run by the read only user
Edit: Note that by completely removing access to the system tables, you no longer allow the user to make calls like cast(). So you may want to run this again to allow access:
GRANT USAGE ON SCHEMA PG_CATALOG to READ_ONLY_USER;
Yes, continue to worry about SQL injection. Malicious SQL statements are not just about writes.
Imagine as well if there were linked servers, or the query was written to access cross-database resources, e.g.:
SELECT * from someServer.somePayrollDB.dbo.EmployeeSalary;
There was an Oracle bug that allowed you to crash the instance by calling a public (but undocumented) method with bad parameters.

Need help with SQL rollback transaction

I have updated my database with this query:
UPDATE suppliers SET contactname = 'Dirk Lucky'
So all the rows in the suppliers table have the contact name: "Dirk Lucky."
How can I rollback this transaction? I want to restore the contactname column in the suppliers table.
Thanks,
Programmer
Sounds like your transaction has already been committed. You can't roll it back anymore.
Your only option is restoring a backup. You might be able to restore a backup as a new database, so you can copy only the contactnames and not lose any other changes.
If your database is in the FULL recovery model and you have the transaction log, you can do a restore from the log. There's an excellent blog post about it here.
If you don't, I seriously hope that you have a backup.
For future reference: When I do updates, I use the following syntax:
SELECT *
--UPDATE a SET col = 'val'
FROM myTable a
WHERE id = 1234
That way, you can see what you're selecting to update first. Then, when you're finally ready to update, you just select from UPDATE down and run the query. I've caught myself many times with this trick. It also works with deletes, so that's a bonus.
Hopefully your database is using the Full Recovery Model. If so:
First, you need to take a transaction log backup now.
You can then perform a database restore from your FULL backup + differentials.
Once done, you can restore the transaction log to a specific point in time prior to your update statement.
See "How to Restore to a point in time"
As other posters have suggested, you could restore to another database, and apply an update back to the live database should you wish to minimise downtime.
Hope this makes sense but let me know if you need assistance.
