I plan on updating some table names by creating a synonym of the old name and renaming the table to what I want it to be. Can replication properly reference a synonym?
Also as a side question, is there an easy way to see if a specific table is actually being replicated? (via a query perhaps)
I don't think so. Replication works by reading the log, and there are no log records generated for a synonym. As to your question about finding out which tables are replicated, a query on sysarticles in the publication database should get you where you want to go. HTH.
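Alternatively, on SQL Server 2005 and later, the replication flags on sys.tables give a quick per-table check ('YourTable' is a placeholder):

SELECT name, is_published, is_merge_published, is_replicated
FROM sys.tables
WHERE name = 'YourTable';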
First of all: I don't need a full-text search engine, and I don't need full-text search in my code. I have a database with ~2000 tables, and I need to find the table and column in which certain information is stored, for development purposes. Is there any quick way (maybe an SQL Server Management Studio trick that I should know of) to do this? I think phpMyAdmin provides such a feature for MySQL databases. At the moment I'm seriously thinking of dumping the database to an .sql file and using a text editor to search for the phrases I'm looking for.
Check the INFORMATION_SCHEMA. You can select from it - there is a view containing all the column names and so on, and you can then run your search against that one.
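For example ('%customer%' is just a placeholder for whatever you're looking for):

SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%customer%'
ORDER BY TABLE_NAME, COLUMN_NAME;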
I don't see a way to do it without dynamic SQL: get the list of all tables and their columns from sys.tables and sys.columns (don't forget to add the proper schema if you're using schemas), construct a query per column that checks for the values you're trying to find and stores the table and column name in a temporary table, place all the queries into a (temp) table, and finally cursor/loop over that table, executing all the queries.
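A rough sketch of that approach, assuming SQL Server 2005 or later, a placeholder search value, and scanning only character columns (it concatenates the per-column checks into one batch instead of cursoring over them, for brevity):

DECLARE @search nvarchar(100);
DECLARE @sql nvarchar(max);
SET @search = N'needle';  -- placeholder value to find
SET @sql = N'';

CREATE TABLE #hits (schema_name sysname, table_name sysname, column_name sysname);

-- Build one EXISTS check per character column.
SELECT @sql = @sql
    + N'IF EXISTS (SELECT 1 FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    + N' WHERE ' + QUOTENAME(c.name) + N' LIKE @p) '
    + N'INSERT #hits VALUES (''' + s.name + N''', ''' + t.name + N''', ''' + c.name + N''');'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.columns AS c ON c.object_id = t.object_id
JOIN sys.types AS ty ON ty.user_type_id = c.user_type_id
WHERE ty.name IN ('char', 'varchar', 'nchar', 'nvarchar');

EXEC sp_executesql @sql, N'@p nvarchar(110)', @p = N'%' + @search + N'%';

SELECT * FROM #hits;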
PS: your idea of dumping everything into *.sql files should work as well; it depends on the volume of data.
I was wondering what the best approach would be for restoring a single record from an MDF file (generated as a backup on the live instance) into the live SQL Server database.
I know about the process of attaching the file to the database and have read quite a bit about completely restoring, but how about selecting a single record from one of the tables and inserting it back into the same table on the live instance?
I could always create the new record from scratch myself based on the resulting row from the select statement, but I am sure that there has got to be a smarter and cleaner approach to such a simple task.
Thanks a bunch in advance, looking forward to your answers.
Cheers.
You cannot simply read a record out of an MDF file; you need to attach it or restore it to a database.
Natively, you can't. However, Red Gate has a product called Virtual Restore that allows you to mount a database from a backup.
Is this for right now or for future planning? If the latter, then you can utilize database snapshots.
Depending on what kind of flexibility you have on the live server, you could always just attach the backup database under a different name on the live server or another linked server and then select the record you want straight into the equivalent table in the live database.
How viable this is depends entirely on the primary key. If it is an auto-generated identity column, inserting the row will generate a different primary key, which may have undesirable results for any linked records you may also want to add; the new primary key would have to be taken into account.
Example query:
INSERT INTO originaldb.dbo.Persons
SELECT * FROM backupdb.dbo.Persons WHERE PersonId = '654G'
originaldb.dbo.Persons is the original table that you want to select into.
backupdb.dbo.Persons is your restored backup table.
You'll need to modify this query a little if you are not selecting the entire row, but that is the gist of it.
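If the primary key is an identity column and you do want to preserve the original value (see the caveat above), a variation along these lines can work - the column names here are made up, and IDENTITY_INSERT requires an explicit column list:

SET IDENTITY_INSERT originaldb.dbo.Persons ON;

INSERT INTO originaldb.dbo.Persons (PersonId, Name, Street)
SELECT PersonId, Name, Street
FROM backupdb.dbo.Persons
WHERE PersonId = 654;

SET IDENTITY_INSERT originaldb.dbo.Persons OFF;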
One thing I want to do is build a personal database for myself at home to use as a financial database (transaction log, checking/savings account tables, etc.), and I want to do this mainly to learn more about developing databases. I am pretty familiar with MS Access, though not put to use in this context, but what I am really trying to learn is SQL Server.
So, that being said, the first question that popped into my mind is: if I have a transactions table that I want to use as a ledger, is there some method to have the table automatically perform a calculation for one field (balance) based on other fields (expense and revenue)? Similar to what someone may do with Excel...
Or is this something I would have to do with an unbound form and an UPDATE-statement kind of approach? If a table constraint exists for this type of idea, I would like to learn it...
I mentioned MS Access in the title, but a SQL Server answer is also most appreciated. Thanks for the help!
Derived data should not be stored except when it needs to be indexed -- you calculate the values in your SQL statements, or in the presentation layer.
In addition to computed columns in SQL Server tables, you can have them in views, and you can index them. The term is "indexed view": when you do that, the data is persisted and updated on the fly whenever the data the view is derived from changes. You can read about it under the TYPES OF VIEWS topic in the same link cited in @Roland Bouman's answer.
Last of all, it's not clear to me why you mention Access at all if you're using SQL Server as your back end. Are you developing your front end in Access?
In MS SQL Server, you can use computed columns for this: http://msdn.microsoft.com/en-us/library/ms191250.aspx
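For example, a per-row net amount (table and column names are made up). Note that a computed column can only reference other columns of the same row, so a running balance across rows would still have to come from a query or view:

CREATE TABLE dbo.Transactions (
    TransactionId int IDENTITY(1,1) PRIMARY KEY,
    Revenue decimal(10,2) NOT NULL DEFAULT 0,
    Expense decimal(10,2) NOT NULL DEFAULT 0,
    Net AS (Revenue - Expense) PERSISTED  -- PERSISTED stores the value; omit it to compute on read
);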
We have an MS Access database that we want to migrate to a SQL Server Database with a new DB design. A part of the application that uses the SQL Server DB is already written.
I looked around to find out how to do the migration step most easily and started with Microsoft's SQL Server Integration Services (SSIS). Now I have gotten to the point where I want to split a table vertically for normalization reasons.
A made-up example looks like this:
MS Access table person
ID
Name
Street
SQL Server table person
id
name
SQL Server table address
id
person_id
street
How can I complete this task best with SSIS? The id columns are identity (autoincrement) columns, so I cannot insert the old ID. How can I put the correct person_id foreign key in the address table?
There might even be a table which has to be broken up into three tables, where a row in table2 belongs to a row in table1 and a row in table3 belongs to a row in table2.
Is SSIS the appropriate means for this?
EDIT
Although this is a one-time migration, we need to have an automated and repeatable process, because the production database is under heavy usage and we are working on the migration in our development environment with recent, but not up-to-date data. We plan for one test run of the migration and have the customer review the behaviour. If everything is fine, we will go for the real migration.
Most of the given solutions include lots of manual steps and are thus not appropriate.
Use the Execute SQL Task and write the statement yourself.
For the parent table, do an INSERT INTO ... SELECT FROM the old table, then do the same for the rest as you progress. Make sure you set IDENTITY_INSERT to ON for the parent table and reuse your old IDs. That will help you keep your data integrity.
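Using the made-up tables from the question, the statements might look like this (the staging database name access_staging is a placeholder):

SET IDENTITY_INSERT dbo.person ON;

INSERT INTO dbo.person (id, name)
SELECT ID, Name
FROM access_staging.dbo.person;

SET IDENTITY_INSERT dbo.person OFF;

-- The old IDs survive, so the child rows can reference them directly.
INSERT INTO dbo.address (person_id, street)
SELECT ID, Street
FROM access_staging.dbo.person;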
For migrating your Access tables into SQL Server, use SSMA, not the Upsizing Wizard from Access.
You'll get a lot more tools at your disposal.
You can then break up your tables one by one from within SQL Server.
I'm not sure if there are any tools that can help you split your tables automatically (at least I couldn't find any), but it's not too difficult to do manually, although how much work is required depends on how you used the original tables in your VBA code and forms in the first place.
A side note
Regarding normalization, don't go overboard with it: I know your example was just that, but normalizing customer addresses is not always (rarely?) needed.
How many addresses can a person have?
If you count a home address, business address, delivery address, billing address, that's probably the most you'll ever need.
In that case, it's better to just keep them in the same table. Normalizing that data will just require more work to recombine and offers no benefit.
Of course, there are cases where it would make sense to normalise, but I've seen people going overboard with the notion (I've been guilty of it as well) and then finding themselves struggling to build more complex queries to join all that split data, making development and maintenance harder and often suffering a performance penalty in the process.
Access is so user-friendly, why not normalize your tables in Access, and then upsize the finished structure from there?
I found a different solution which was not mentioned yet and which allows us to use all the comfort and options of the data flow task:
If the destination database is on a local SQL Server, you can use a data flow task with a SQL Server destination instead of an OLE DB destination.
For a SQL Server destination you can check the "keep identities" option. (I do not know if the English names are correct, because we have a German version.) With this you can write into identity columns.
We found that we cannot use the old primary keys everywhere, because we have some tables that take a union of records from multiple tables.
We start the process by building a temporary mapping table with the following columns:
new_id (identity)
old_id (int)
old_tablename (string)
We first fill in the old_id values for every table that is referenced by a foreign key in the new schema. The new_id values are generated automatically by SQL Server.
So we can use a join to translate from old_id to new_id where needed. We use the new_id values to fill the identity (primary key) columns in the new tables with the "keep identities" option, and for the foreign keys we can simply look up the new values in our mapping table with a join.
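In T-SQL terms, the mapping step looks roughly like this (all table names are made up; the staging database holds the old data):

CREATE TABLE dbo.id_map (
    new_id int IDENTITY(1,1) PRIMARY KEY,
    old_id int NOT NULL,
    old_tablename sysname NOT NULL
);

-- Register every old key; SQL Server generates the new_id values.
INSERT INTO dbo.id_map (old_id, old_tablename)
SELECT ID, 'person'
FROM staging.dbo.person;

-- Translate an old foreign key into the new one via a join.
INSERT INTO dbo.address (person_id, street)
SELECT m.new_id, p.Street
FROM staging.dbo.person AS p
JOIN dbo.id_map AS m
  ON m.old_id = p.ID
 AND m.old_tablename = 'person';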
You might also look at Jamie Thomson's SSIS Normalizer component. I just found out about it today (haven't actually tried it yet). The example he posts looks a lot like the one in your question.
I have a problem with a database at my work. There is currently auditing in place, but it's clunky, requires a lot of maintenance, and falls short in a few regards. So I am replacing it.
I want to do this in as generic of a way as possible and have designed the tables, and how everything will link and be updated.
Now, that's all fine and good, but I want to be able to write a generic way to insert records into these audit tables (without having to enter a command for each column in each table being changed).
Is there any way within a stored procedure to iterate over all the columns in a table? And I would like to write this in such a way that it will work with several tables, and automatically pick up and audit added columns and such.
Any ideas?
EDIT: Guess I should clarify. I will be auditing the data that is in the tables, but I will be using the same table(s) to store the audited data for every table in the database.
And I cannot use triggers because, usually, when an update occurs, it occurs across multiple tables, and I would like all of these updates to be part of a single change set.
That is not a problem, because I can do all the updates from within a single stored proc. I would just prefer some way, like a loop, to get all the updated fields, figure out which ones changed, and then insert the changed ones into the audit table.
And I would like to do this without having a long list of IF statements and INSERT statements for each column. (By doing this in a generic loop, it will handle added columns automatically and not be bothered by deleted columns.)
By "added columns" I guess you are looking to audit DDL. If you use SQL 2005, then you want this link.
If you don't use SQL 2005, then you probably want to use one of the many SQL schema comparison tools; Red Gate's tool set probably has something in there.
If you don't have $ for tools, then you might just want to run periodic queries against information_schema.tables and information_schema.columns. By periodically capturing these in permanent tables, you can identify when they have gained or lost rows (and hence that a schema change occurred).
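For example (the snapshot table name is made up):

-- First run: create the snapshot table.
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, GETDATE() AS captured_at
INTO dbo.schema_snapshot
FROM INFORMATION_SCHEMA.COLUMNS;

-- Later runs: append, then compare the latest two captures to spot changes.
INSERT INTO dbo.schema_snapshot
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, GETDATE()
FROM INFORMATION_SCHEMA.COLUMNS;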
If you are doing data audit instead, then you'll want to code-generate some triggers, again using information_schema.tables and information_schema.columns.
There would be performance considerations, but you could add insert and update triggers to all of your tables, and have the triggers insert into your audit tables.
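A bare-bones sketch of that pattern for a single table, with all names made up:

CREATE TRIGGER trg_Persons_Audit
ON dbo.Persons
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Record which row changed and when; extend as needed to capture old/new values.
    INSERT INTO dbo.AuditLog (table_name, key_value, changed_at)
    SELECT 'Persons', i.PersonId, GETDATE()
    FROM inserted AS i;
END;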
Use DDL Triggers (assuming you have SQL Server 2005+)!
http://www.sqlteam.com/article/using-ddl-triggers-in-sql-server-2005-to-capture-schema-changes
http://technet.microsoft.com/en-us/library/ms189871.aspx
That could be done if you were using a data access layer that could trap which tables and columns are being updated and generate the insert statements for the audit table. In a stored procedure? Which stored procedure? Do you have a single one that does updates? Or are you creating one per table?
If it's an option for you, just upgrade to SQL Server 2008 and turn on Change Data Capture.
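Enabling it is a couple of system procedure calls - once per database, then once per table ('dbo' and 'Persons' below are placeholders):

-- Enable CDC at the database level, then per table.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Persons',
    @role_name     = NULL;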