SqlServer: How to get meta-data about tables and their relationships? - sql-server

I was wondering if there was a way (relatively simple, I hope) to get information about a table and its attributes and relationships?
Clarification: I want to grab all tables in the database and get the meta-model for the whole database: tables, column data, indexes, unique constraints, relationships between tables, etc.

The system has a data dictionary exposed through the catalog views sys.tables, sys.columns, sys.indexes and various others. You can query these views to get metadata about the database structure. This posting has a script I wrote a few years ago to reverse engineer a database schema; if you take a look at it you can see some examples of how to use the system data dictionary views.
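For example, a couple of queries along these lines will enumerate tables, columns and foreign keys (the column aliases are just for readability):

-- List every user table with its columns, data types and nullability.
SELECT  s.name        AS schema_name,
        t.name        AS table_name,
        c.name        AS column_name,
        ty.name       AS data_type,
        c.max_length,
        c.is_nullable
FROM    sys.tables   t
JOIN    sys.schemas  s  ON s.schema_id = t.schema_id
JOIN    sys.columns  c  ON c.object_id = t.object_id
JOIN    sys.types    ty ON ty.user_type_id = c.user_type_id
ORDER BY s.name, t.name, c.column_id;

-- Foreign key relationships between tables.
SELECT  fk.name                              AS foreign_key_name,
        OBJECT_NAME(fk.parent_object_id)     AS referencing_table,
        OBJECT_NAME(fk.referenced_object_id) AS referenced_table
FROM    sys.foreign_keys fk;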

There are a whole bunch of system views in the INFORMATION_SCHEMA schema in SQL Server 2005+. Is there anything in particular you're wanting?
Some of those views include (there's a sample query after this list):
check_constraints,
columns,
tables,
views
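For instance, something like this dumps every column in the database through those views:

-- All columns in the database, via the ANSI metadata views.
SELECT  TABLE_SCHEMA,
        TABLE_NAME,
        COLUMN_NAME,
        DATA_TYPE,
        IS_NULLABLE
FROM    INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;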

Try sp_help <tablename>. This will show you foreign key references and data about the columns, etc. - that is, if you are interested in a specific table, as your question seemed to indicate.
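For example (the table name below is just a placeholder):

-- Reports columns, indexes, constraints and foreign key references for one table.
EXEC sp_help 'dbo.YourTable';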

If using .NET code is an option SMO is the best way to do it.
It abstracts away all these system views and tables hiding them behind nice and easy to use classes and collections.
http://msdn.microsoft.com/en-us/library/ms162169.aspx
This is the same infrastructure SQL Server Management Studio uses itself. It even supports scripting.
Abstraction comes at a cost, though, so if you need maximum performance you'd still have to use the system views and custom SQL.

You can try to use this library db-meta

Related

MS SQL Server: central database and foreign keys

I'm currently developing one project of many to come which will be using its own database and also data from a central database.
Example:
the database "accountancy" with all accountancy package specific tables.
the database "personelladministration" with its specific tables
But we also use data which is general and will be used in all projects like "countries", "cities", ...
So we have put these tables in a separate database called "general"
We come from a db2 environment where we could create foreign keys between databases.
However, we are switching to MS SQL Server, where it is not possible to create foreign keys between databases.
I have seen that a workaround would be to use triggers, but I'm not convinced that is a clean solution.
Are we doing something wrong in our setup? Because it seems right to me to put tables with general data in a separate database instead of having a "countries" table in every database; that seems difficult to maintain and inefficient.
What could be a good approach to overcome this?
I would say that countries is not a terrible table to reproduce in multiple databases. I would rather duplicate static data like that than use more elaborate techniques. There is one physical schema per database in SQL Server and it cannot be shared; that is why people use replication or triggers for shared data.
I came across this problem a while back. We have one database for authentication; however, those users have to be shared across multiple applications, some of which have their own database.
Here is my question on this topic.
We resorted to replication and a custom Authentication/Registration service agent to keep the data up to date.
Using views, as Sourav_Agasti suggested in his answer, would be the most straightforward approach for static data. You can create views and indexed views and join data from databases on linked servers.
Create a loopback linked server and then create a view (if required, on each database) which accesses the table in this "central database" through the linked server. There will be a minor performance impact, but it is more than compensated for by the simplicity of the approach.
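A bare-bones sketch of that setup, where the "General" database, the dbo.Countries table and its columns are all placeholders:

-- Create a loopback linked server pointing back at the local instance.
DECLARE @self sysname = @@SERVERNAME;
EXEC sp_addlinkedserver
     @server     = N'LOOPBACK',
     @srvproduct = N'',
     @provider   = N'SQLNCLI',
     @datasrc    = @self;
GO
-- In each consuming database, expose the shared table through a view.
CREATE VIEW dbo.Countries
AS
SELECT CountryCode, CountryName
FROM LOOPBACK.General.dbo.Countries;
GO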

Better practice for SQL? One database for shared resources or tables in each Database with those resources

I have shared resources across all of my databases: Users, Companies, etc. These are shared between all of my databases and the tables are the same. I want to create one database for these tables and have all of my databases reference this one instead of having multiple copies of the same tables. I come from a C# background and I am not very proficient in SQL. I am writing a new application that uses several of the databases we have.
Question: Should I make one database an authoritative source for these resources? The problem I see is that I need foreign key relationships between databases, and without triggers this is not possible. Not to mention that when I write my LINQ statements I cannot query by these items.
We were able to achieve this by having one central database as the source of truth, then having copies of the applicable tables pushed out via triggers to all the databases that needed them. You have to make sure all CRUD is done against the source-of-truth database, otherwise it gets very complicated to manage everything. You can then create the foreign keys against the copy tables.
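A stripped-down version of one of those triggers looks something like this (database, table and column names are placeholders; updates and deletes need their own triggers):

-- In the central database: push new rows out to the copy table in AppDb.
CREATE TRIGGER dbo.trg_Users_PushInsert
ON dbo.Users
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO AppDb.dbo.Users (UserId, UserName)
    SELECT UserId, UserName
    FROM inserted;
END;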

Linking tables between databases

I’m after a bit of advice on the best way to go about this in SQL Server 2008 R2 Express. I have a number of applications that are in separate databases on the same server. They are all “plugins” that use a central staff/structure list that will be in a separate database. The application is in the process of being migrated from JET.
What I’m looking for is the best way for all the “plugin” databases to be able to see the central database and use those tables in standard queries, views, etc.
As I’m using Express, that rules out any replication solution, and so far the only option I can think of is to use triggers or a stored procedure to “push” out all the changes to the plugins. The information needs to be populated on a near real-time basis; however, the number of changes will be very small, maybe up to 100 a day, and the biggest table only has about 1,000 rows at the moment (the staff names table).
Hopefully that covers everything, but if anyone needs any more details then just ask.
Thanks
Apologies if I've misunderstood, but from your description it sounds like all these databases are hosted on the same instance of SQL Server - it's your mention of replication that makes me uncertain.
Assuming that's the case, you should be able to replace any copies of tables from the central database which are held in the "plugin" databases with views or synonyms which reference the central tables directly, since SQL Server allows you to make references between databases on the same server using three-part naming (database_name.schema_name.object_name).
For example, if each plugin db has a table StaffNames, you could replace this with a view by dropping the table, then creating a view:
drop table StaffNames
go
create view StaffNames
as
select * from <centraldbname>.<schema - probably dbo>.StaffNames
go
and your code should continue to work seamlessly, as long as permissions are set up.
Alternatively, you could replace all the references to the shared tables in the plugin databases with three-part name references to the central database, but the view method requires less work.
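If you went the synonym route instead, it's a one-liner per table (the central database name below is a placeholder):

-- In the plugin database, after dropping the local copy as above:
-- a synonym pointing at the central table.
CREATE SYNONYM dbo.StaffNames FOR CentralStaffDb.dbo.StaffNames;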

Grouping ETL Staging Tables With User Schemas?

I was thinking of putting staging tables and the stored procedures that update those tables into their own schema, such that when importing data from SomeTable to the data warehouse, I would run an Initial.StageSomeTable procedure which would insert the data into the Initial.SomeTable table. This way all the procs and tables dealing with the Initial staging step are grouped together. Then I'd have a Validation schema for that stage of the ETL, etc.
This seems cleaner than trying to uniquely name all these very similar tables, since each table will have multiple instances of itself throughout the staging process.
Question: Is using a user schema to group tables/procs/views together an appropriate use of user schemas in MS SQL Server? Or are user schemas supposed to be used for security, such as grouping permissions together for objects?
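To make that concrete, what I have in mind is roughly this (the column definitions are invented just for illustration):

CREATE SCHEMA Initial;
GO
-- Staging copy of the source table, grouped under the Initial schema.
CREATE TABLE Initial.SomeTable
(
    Id      int           NOT NULL,
    Payload nvarchar(100) NULL
);
GO
-- The staging proc lives in the same schema as the table it loads.
CREATE PROCEDURE Initial.StageSomeTable
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO Initial.SomeTable (Id, Payload)
    SELECT Id, Payload
    FROM dbo.SomeTable;   -- hypothetical source table
END;
GO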
This is actually a recommended practice. Take a look at the Microsoft Business Intelligence ETL Design Practices from Project REAL. You will find (download the doc from the first link) that they use quite a few schemata to group and identify objects in the warehouse.
In addition to dbo and etl, they also use admin, audit, part, olap and a few more.
I think it's appropriate enough; it doesn't really matter. You could use another database if you liked, which is actually what we do.
I'm not sure why you would want a Validation schema, though; what are you going to do there?
Both the reasons you list (purpose/intent, security) are valid reasons to use schemas. Once you start using them, you should always specify schema when referencing an object (although I'm lazy and never specify dbo).
One trick we use is to have the same-named table in each of several schemas, combined with table partitioning (available in SQL 2005 and up). Load the data into the first schema, then when it's validated, "swap" the partition into dbo, after first swapping the dbo partition into a "dumpster" schema copy of the table. Net production downtime is measured in seconds, and it's all carefully wrapped in an explicit transaction.
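Stripped down, the swap itself is just two metadata operations (the table and schema names here are invented, and all three tables must be identically structured on the same partition scheme):

BEGIN TRANSACTION;
    -- Move the current production partition out of the way first...
    ALTER TABLE dbo.Fact SWITCH PARTITION 1 TO dumpster.Fact PARTITION 1;
    -- ...then switch the freshly loaded, validated partition into production.
    ALTER TABLE etl.Fact SWITCH PARTITION 1 TO dbo.Fact PARTITION 1;
COMMIT TRANSACTION;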

Converting Views into tables using SSIS

Is it a good idea to convert complex views in "db1" into tables in "db2" using SSIS?
The purpose of converting the views to tables is to make the reports faster.
Are there any disadvantages or risks?
You would do much better to make your view schema-bound and add indexes as appropriate. Adding a unique clustered index to a schema-bound view causes SQL Server to store a concrete copy of the view's data on your server.
For more information, search for "SCHEMABINDING" in this MSDN link.
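As a rough sketch (the table and column names are assumed, and indexed views have further requirements around SET options and deterministic expressions that aren't shown here):

-- Schema-bound view over a hypothetical dbo.Sales table.
CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING
AS
SELECT  ProductId,
        SUM(ISNULL(Amount, 0)) AS TotalAmount,
        COUNT_BIG(*)           AS RowCnt   -- required when the view uses GROUP BY
FROM    dbo.Sales
GROUP BY ProductId;
GO
-- The unique clustered index is what actually materializes the view's data.
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
    ON dbo.vSalesByProduct (ProductId);
GO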
