Is there any handy tool that can make updating tables easier? Usually I get an Excel file with the original value in one column and the new value in another column, and then I write a formula in Excel to generate the UPDATE statements. Is there any way to simplify this updating task?
I believe the approach in SQL Server 2000 and 2005 would be different, so could we discuss them both? Thanks.
In addition, these updates are usually requested by non-programmers (meaning they don't understand SQL, so it may not be feasible to let them write the queries themselves). Is there any tool that lets them update the tables directly without having DBAs do this task? That tool would also need to limit their privileges to modifying only certain tables, and ideally it would provide a way to roll back changes.
Create a DTS package that imports a CSV file, makes the updates, and then archives the file. The user can drop the file in a folder designated for the task, or an ops person can do it. Schedule the DTS package to run every hour, day, etc.
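If it helps to picture the update step, here is a minimal sketch of what the package could run once the CSV has been loaded into a staging table (the table and column names are made up for illustration):

    -- Staging table filled by the DTS/SSIS import step (hypothetical names)
    CREATE TABLE dbo.Staging_ValueUpdate
    (
        OldValue VARCHAR(50) NOT NULL,
        NewValue VARCHAR(50) NOT NULL
    );

    -- Apply the updates by joining the target table to the staged values
    UPDATE t
    SET    t.SomeColumn = s.NewValue
    FROM   dbo.TargetTable AS t
    JOIN   dbo.Staging_ValueUpdate AS s
           ON s.OldValue = t.SomeColumn;

    -- Clear the staging table so the next file starts clean
    TRUNCATE TABLE dbo.Staging_ValueUpdate;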
If your users insist on keeping Excel, you've got several options for getting the data into SQL Server. My preferred one would be DTS/SSIS, as mentioned by buckbova.
However, another method is to use OPENROWSET(), which makes it possible to query your Excel file as if it were a table. I wrote a small article about it here: http://blog.hoegaerden.be/2010/03/29/retrieving-data-from-excel/
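To give an idea of what that looks like, a sketch (the provider string, file path, sheet name and column headers are only placeholders; on 64-bit servers you would use the ACE provider instead of Jet, and 'Ad Hoc Distributed Queries' has to be enabled):

    -- Read the Excel sheet as if it were a table
    SELECT OldValue, NewValue
    FROM   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                      'Excel 8.0;Database=C:\Updates\values.xls;HDR=YES',
                      'SELECT * FROM [Sheet1$]');

    -- Or use it directly as the source of an UPDATE
    UPDATE t
    SET    t.SomeColumn = x.NewValue
    FROM   dbo.TargetTable AS t
    JOIN   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                      'Excel 8.0;Database=C:\Updates\values.xls;HDR=YES',
                      'SELECT * FROM [Sheet1$]') AS x
           ON x.OldValue = t.SomeColumn;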
Another approach that hasn't been mentioned yet (I'm not a big fan of letting regular users edit data directly in the DB): is there any possibility of creating a small custom application for them?
There you go, a couple more possible solutions :-)
Valentino.
I think the best approach is to expose a view on your data, accessible to the users who are allowed to do updates, and set up triggers on the view to perform the actual updates on the underlying data. Restrict changes to only the columns they should be changing.
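A minimal sketch of that idea, with made-up table and column names:

    -- Expose only the columns the users are allowed to change
    CREATE VIEW dbo.vCustomerMaintenance
    AS
    SELECT CustomerId, PhoneNumber, Email
    FROM   dbo.Customer;
    GO

    -- Perform the real update on the base table, restricted to those columns
    CREATE TRIGGER dbo.trg_vCustomerMaintenance_Update
    ON dbo.vCustomerMaintenance
    INSTEAD OF UPDATE
    AS
    BEGIN
        UPDATE c
        SET    c.PhoneNumber = i.PhoneNumber,
               c.Email       = i.Email
        FROM   dbo.Customer AS c
        JOIN   inserted     AS i ON i.CustomerId = c.CustomerId;
    END
    GO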
This technique can work on SQL Server 2000 and 2005.
I would add audit triggers on the underlying tables so you can always track changes.
You'll have complete control, and they can connect to it with Access or whatever and perform their maintenance.
You could create some accounts in SQL Server for these users and limit their access to only certain tables and columns, along with only SELECT/UPDATE/INSERT privileges. Then you could create an Access database with linked tables pointing to them.
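For example (login, table and column names are placeholders; this is the 2005 syntax - on 2000 you would use sp_addlogin / sp_grantdbaccess instead):

    -- A dedicated login/user for the maintenance users
    CREATE LOGIN MaintUser WITH PASSWORD = 'StrongPasswordHere!';
    CREATE USER  MaintUser FOR LOGIN MaintUser;

    -- Allow work on one table only, and restrict updates to specific columns
    GRANT SELECT, INSERT ON dbo.Customer TO MaintUser;
    GRANT UPDATE (PhoneNumber, Email) ON dbo.Customer TO MaintUser;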
We intend to create DACPAC files using SQL database projects and distribute them automatically to several environments (DEV/QA/PROD) using Azure Pipelines. I can make changes to the schema for a table, view, function, or procedure, but I'm not sure how we can update specific data in a table. I am sure this is a very common use case, but unfortunately I am having a hard time implementing it.
Any idea how I can automate creating/updating/deleting a row in a table?
E.g.: update myTable set myColumn = 5 where someColumn = 'condition'
In your database project you can add a Post-Deployment Script.
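Since the post-deployment script runs on every deployment, the data changes in it are usually written so they can run repeatedly. A sketch based on the example from the question:

    -- Post-Deployment Script (runs after every deployment, so keep it idempotent)
    UPDATE myTable
    SET    myColumn = 5
    WHERE  someColumn = 'condition'
      AND  myColumn <> 5;          -- only touch rows that still need the change

    -- Seeding reference rows works the same way
    IF NOT EXISTS (SELECT 1 FROM myTable WHERE someColumn = 'condition')
        INSERT INTO myTable (someColumn, myColumn) VALUES ('condition', 5);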
Do not. Seriously. I have always found DACPAC to be WAY too limiting for serious operations. Look at how the SQL is generated and realize how little control you have.
The standard approach is to have deployment scripts that you generate and that make the changes in the database, plus a table in the database tracking which scripts have already executed (possibly with a checksum so you do not need to change a script's name to update it).
You can partially generate them with a schema compare (and then generate the change script), but deployment scripts also let you do things like data scrubbing and multi-step transformations that DACPAC by design cannot do efficiently or easily.
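If you roll this yourself rather than picking one of the existing frameworks, the tracking table can be as bare-bones as this (names, columns and the hash choice are only illustrative):

    -- One row per migration script that has already been applied
    CREATE TABLE dbo.SchemaMigration
    (
        ScriptName   NVARCHAR(255)  NOT NULL PRIMARY KEY,
        ScriptHash   VARBINARY(32)  NULL,    -- checksum of the script text
        AppliedAtUtc DATETIME2      NOT NULL DEFAULT SYSUTCDATETIME()
    );

    -- The runner executes a script only if it is not recorded here
    -- (or its hash changed), then records it:
    DECLARE @ScriptText NVARCHAR(MAX) = N'...contents of the script...';
    INSERT INTO dbo.SchemaMigration (ScriptName, ScriptHash)
    VALUES (N'0042_scrub_customer_emails.sql', HASHBYTES('SHA2_256', @ScriptText));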
There are plenty of frameworks for this around. They generally belong in the category of developer tools.
I have two databases on the same Azure SQL server. I want the two databases to interact with each other using a trigger, i.e. if a record is inserted into the Customer table of the first database, the trigger fires and the record is inserted into the other database.
We had (and still have) the same problem with the insert/update/delete triggers we use: we write a record to Database-1, which holds the primary table, but also update Database-2, where we keep "archive" versions of the tables.
The only solution we have identified and are testing is to bring all of the tables into a single database and separate them under different schemas within that one database.
Analysis so far of this approach looks promising.
I think what you're trying to do is not allowed in SQL Azure. In my experience, what you are trying to do is bad practice on-premises as well (think about backup/restore and availability scenarios).
You should move the dependency into the application and have the application update both databases, as appropriate.
Anyway, if you want to continue with this approach, please take a look at the Elastic Query feature: https://learn.microsoft.com/en-in/azure/sql-database/sql-database-elastic-query-overview
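Keep in mind that Elastic Query external tables are read-only, so this only covers the cross-database read side and will not replace the trigger's insert. A rough sketch (server, database, credential and table names are placeholders):

    -- Run in the database that needs to read from the other one
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPasswordHere!';

    CREATE DATABASE SCOPED CREDENTIAL RemoteDbCred
        WITH IDENTITY = 'remoteUser', SECRET = 'remotePassword';

    CREATE EXTERNAL DATA SOURCE RemoteDb WITH
    (
        TYPE = RDBMS,
        LOCATION = 'yourserver.database.windows.net',
        DATABASE_NAME = 'SecondDatabase',
        CREDENTIAL = RemoteDbCred
    );

    -- Mirrors the schema of dbo.Customer in the second database
    CREATE EXTERNAL TABLE dbo.Customer_Remote
    (
        CustomerId INT,
        Name       NVARCHAR(100)
    )
    WITH (DATA_SOURCE = RemoteDb, SCHEMA_NAME = 'dbo', OBJECT_NAME = 'Customer');

    SELECT * FROM dbo.Customer_Remote;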
Please let me know if I can help with anything.
I'm upgrading my SQL Server 2000 database to SQL Server 2008 R2, and I want to make use of the Change Data Capture (CDC) feature. In my existing application I have similar functionality, but it is implemented with triggers and historical tables with an Hst_ prefix and almost the same schema as the original tables.
My question is: is there any way to migrate my data from Hst_ tables to the tables used by CDC feature?
I was thinking of doing that like this:
I have the table Cases.
I'm using my custom historization mechanism, so I also have three triggers (on insert, update, and delete) and a twin table, Hst_Cases.
Now I enable CDC on the table Cases.
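That enabling step is essentially the following (a sketch; the gating role is left out for simplicity):

    -- Enable CDC for the database, then for the Cases table
    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Cases',
        @role_name     = NULL;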
CDC creates a function that returns the historical data (fn_cdc_get_all_changes_dbo_Cases) and also a system table that actually holds the data (cdc.dbo_Cases_CT).
I could insert the data from Hst_Cases into cdc.dbo_Cases_CT, but I have the following problems:
I don't know how to get __$start_lsn and __$seqval.
It is difficult to figure out __$update_mask (I would have to compare each pair of rows).
Is this the only way to do it? I want to avoid a situation where I have to join the "new" historical data with the "old" historical data from the Hst_ tables.
Thanks!
You typically don't want to use the capture tables for long-term storage of change data; it would be better to have an SSIS package move the capture data to permanent tables. If you do use them, I think they'll be empty after a restore of your database unless you use the KEEP_CDC option when restoring. You'll also need to disable the job that automatically purges the capture tables.
If you create your own tables for storage, you can omit the lsn and mask fields.
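A rough sketch of that kind of archiving step, based on the table from the question (the column list is trimmed down; an SSIS package or scheduled job would run something like this):

    -- Permanent history table without the CDC bookkeeping columns
    CREATE TABLE dbo.CasesHistory
    (
        Operation     INT           NOT NULL,  -- __$operation: 1=delete, 2=insert, 4=update (after image)
        ArchivedAtUtc DATETIME      NOT NULL DEFAULT GETUTCDATE(),
        CaseId        INT           NOT NULL,
        Title         NVARCHAR(200) NULL       -- ...remaining Cases columns
    );

    -- Pull everything captured so far and archive it
    DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_Cases');
    DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

    INSERT INTO dbo.CasesHistory (Operation, CaseId, Title)
    SELECT __$operation, CaseId, Title
    FROM   cdc.fn_cdc_get_all_changes_dbo_Cases(@from_lsn, @to_lsn, N'all');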
I need to create a log in my DB to which every action in the program is written.
I also want to store additional data with each entry, for example the table and row the action was applied to.
In other words I want the log to be dynamic and should be able to refer to the other tables in the database.
The problem is, I don't know how to relate all the tables to this log.
Any ideas?
You have two choices here:
1) modify your program to add logging for every db access
2) add triggers to each table in your db to perform logging operations.
I don't recommend one logging table for all tables. You will have locking issues if you do that (every insert, update, and delete in every table would have to hit that one table - bad idea). Create an audit table for each table that you want to audit. There are lots of possible designs, but they usually include some variant of the old value, the new value, the date changed, and the user who made the change.
Then create triggers on each table to log the changes.
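A stripped-down example of one such table/trigger pair, with hypothetical names, auditing updates to a single column:

    -- Audit table dedicated to dbo.Customer
    CREATE TABLE dbo.Customer_Audit
    (
        AuditId    INT IDENTITY(1,1) PRIMARY KEY,
        CustomerId INT           NOT NULL,
        OldEmail   NVARCHAR(200) NULL,
        NewEmail   NVARCHAR(200) NULL,
        ChangedAt  DATETIME      NOT NULL DEFAULT GETDATE(),
        ChangedBy  SYSNAME       NOT NULL DEFAULT SUSER_SNAME()
    );
    GO

    -- Trigger that records the before/after values of the audited column
    CREATE TRIGGER dbo.trg_Customer_Audit_Update
    ON dbo.Customer
    AFTER UPDATE
    AS
    BEGIN
        INSERT INTO dbo.Customer_Audit (CustomerId, OldEmail, NewEmail)
        SELECT d.CustomerId, d.Email, i.Email
        FROM   deleted  AS d
        JOIN   inserted AS i ON i.CustomerId = d.CustomerId
        WHERE  ISNULL(d.Email, N'') <> ISNULL(i.Email, N'');
    END
    GO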
I know SQL Server 2008 also has a built-in way to set up auditing; this would be easier to set up than manual auditing and might be enough to lure your company into moving to 2008.
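If you do end up on 2008, the built-in feature is SQL Server Audit. A minimal sketch (file path, database and table names are placeholders; as far as I know, database-level audit specifications need Enterprise Edition on 2008):

    -- Server-level audit writing to a file
    USE master;
    CREATE SERVER AUDIT ChangeAudit
        TO FILE (FILEPATH = 'D:\AuditLogs\');
    ALTER SERVER AUDIT ChangeAudit WITH (STATE = ON);
    GO

    -- Database-level specification: record DML against one table
    USE MyDatabase;
    CREATE DATABASE AUDIT SPECIFICATION ChangeAuditSpec
        FOR SERVER AUDIT ChangeAudit
        ADD (INSERT, UPDATE, DELETE ON dbo.Customer BY public)
        WITH (STATE = ON);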
We have an application that has 1000+ databases and 600+ sprocs. Each database represents a different client.
Problem: we need to move this to a single database while affecting the UI as little as possible, meaning we don't want to change all the sproc signatures at one time.
The connection string currently sets the database attribute; a proposal is to move that to the user attribute. That attribute (via SYSTEM_USER) could then be used to determine the site identifier, which would be used in the WHERE clause.
The above would not be the final solution, but it lets us change the sproc signatures at a slow, controlled pace. Once they are all done, we can correct the connection string and get some connection pooling back.
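To make the interim idea concrete, here is a rough sketch with invented table, column and proc names, assuming one SQL login per client:

    -- Maps each client login to its site identifier
    CREATE TABLE dbo.SiteLogin
    (
        LoginName SYSNAME NOT NULL PRIMARY KEY,
        SiteId    INT     NOT NULL
    );
    GO

    -- Existing proc, with the filter derived from the caller instead of a new parameter
    CREATE PROCEDURE dbo.GetOpenOrders
    AS
    BEGIN
        DECLARE @SiteId INT;
        SELECT @SiteId = SiteId FROM dbo.SiteLogin WHERE LoginName = SYSTEM_USER;

        SELECT o.OrderId, o.OrderDate, o.Total
        FROM   dbo.Orders AS o
        WHERE  o.SiteId = @SiteId;   -- this predicate gets added to each proc over time
    END
    GO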
Are there any limitations on the number of logins/users we can have on SQL Server 2005/2008? Or has anyone been down this path who could shed some light on a better option?
See my answer here
Ideas for Combining Thousand Databases into One Database
Sounds like you two are working on the same project. You will need to change every proc before you can move to one database, or each client will see the others' data.
As for the number of logins on SQL Server 2005 / 08 - I don't think anyone has ever run into a hard limit here. A few thousand will NOT be any problem at all.
What you could consider for this scenario is one schema per customer inside your single DB, e.g. customer "Miller" gets a "miller" schema with its objects inside, and customer "Brown" gets a "brown" schema.
And contrary to what HLGEM just responded: no, customers won't see each other's data if you set up proper permissions, restricting each customer (and its users) to its own schema only. That should work just fine.
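A tiny example of what that setup could look like (logins, users and schema names are invented):

    -- One user per customer (the login is assumed to exist already)
    CREATE USER MillerUser FOR LOGIN MillerLogin;
    GO

    -- One schema per customer
    CREATE SCHEMA miller AUTHORIZATION dbo;
    GO

    -- Miller's user may only work inside the miller schema
    GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON SCHEMA::miller TO MillerUser;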
Marc
You might also consider setting a distinctive application name in the connection string rather than using a distinctive user, which you can get into your where clause using APP_NAME(). I'm sure that SQL Server won't have a problem with thousands of logins, but you may prefer not to have to create them.
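Roughly like this - the application name is whatever the connection string sets (e.g. "Application Name=MillerSite"), and the lookup table here is invented for the sketch:

    -- Inside the procs, resolve the caller the same way as with SYSTEM_USER
    DECLARE @SiteId INT;
    SELECT @SiteId = SiteId
    FROM   dbo.SiteApplication           -- hypothetical lookup: application name -> site id
    WHERE  ApplicationName = APP_NAME();

    SELECT o.OrderId, o.OrderDate, o.Total
    FROM   dbo.Orders AS o
    WHERE  o.SiteId = @SiteId;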