Converting Views into tables using SSIS - sql-server

Is it a good idea to convert complex views in "db1" into tables in "db2" using SSIS?
The purpose of converting the views to tables is to make reports faster.
Are there any disadvantages or risks?

You would do much better to make your view schema-bound and add indexes as appropriate. Doing this will actually cause SQL Server to materialize a concrete copy of the view on your server.
For more information, search for "SCHEMABINDING" on MSDN.
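For example, here is a minimal sketch of an indexed view; the dbo.Sales table and its columns are hypothetical:

-- SCHEMABINDING is required before a view can be indexed.
CREATE VIEW dbo.vSalesSummary
WITH SCHEMABINDING
AS
SELECT ProductId,
       COUNT_BIG(*)  AS OrderCount,   -- COUNT_BIG is mandatory with GROUP BY
       SUM(Quantity) AS TotalQuantity
FROM dbo.Sales
GROUP BY ProductId;
GO

-- The unique clustered index is what materializes the view on disk.
CREATE UNIQUE CLUSTERED INDEX IX_vSalesSummary
    ON dbo.vSalesSummary (ProductId);

Once the clustered index exists, the view's result set is stored and maintained by the engine, which is usually what people are after when they consider copying views into tables.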

Related

Long-running view in ssas-tabular

I have a SQL Server database where we have created some views based on dim and fact tables. I need to build an SSAS tabular model based on my tables and views, but one of the views runs for 1.5 hours as a plain SQL query (in SSMS). I need to use this same view to build my SSAS tabular model, and 1.5 hours is not acceptable. The view is made up of more than 10 table joins and a lot of WHERE conditions.
1) Can I bring all the tables used in this view into my SSAS tabular model? If so, I am not sure how to join them all and apply the where clauses inside SSAS to build something similar to my view. Is that possible? If yes, how?
or
2) If I build the SSAS model from that view once, and then want to incrementally load the data daily, what is the best way to do that?
The best option is to set up a proper ETL process. That is:
Extract the tables from your source SQL database into a new SQL database that you control.
Transform the data into a star schema.
Load the data from the star schema into SSAS.
On SQL Server, the most common approach is to use SSIS packages for data extraction, movement, and orchestration, and SQL Server Agent jobs for scheduling.
To answer your questions:
Yes, it is certainly possible to bring all of the tables from your source system directly into your tabular model, but please don't do this! You will only create problems for yourself later on when writing DAX calculations. More information here.
Incrementally loading data is something you decide for each table that is imported into your tabular model. Again, this is much easier if you have a proper star schema, as you would typically run full processing on all your dimension tables, and then do incremental processing only on the largest fact tables.
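As an illustration, here is a minimal T-SQL sketch of an incremental fact load, assuming hypothetical staging (stg.Sales) and warehouse (dw.FactSales) tables that both carry a LoadDate column:

-- Find the high-water mark of what has already been loaded.
DECLARE @LastLoad datetime2 =
    (SELECT ISNULL(MAX(LoadDate), '19000101') FROM dw.FactSales);

-- Append only the rows that arrived since the last run.
INSERT INTO dw.FactSales (DateKey, ProductKey, Quantity, Amount, LoadDate)
SELECT s.DateKey, s.ProductKey, s.Quantity, s.Amount, s.LoadDate
FROM stg.Sales AS s
WHERE s.LoadDate > @LastLoad;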

MS SQL Server: central database and foreign keys

I'm currently developing one project of many to come which will use its own database and also data from a central database.
Example:
the database "accountancy" with all accountancy-package-specific tables;
the database "personelladministration" with its specific tables.
But we also use data which is general and will be used in all projects, like "countries", "cities", ...
So we have put these tables in a separate database called "general".
We come from a DB2 environment where we could create foreign keys between databases.
However, we are switching to MS SQL Server, where it is not possible to create foreign keys between databases.
I have seen that a workaround would be to use triggers, but I'm not convinced that is a clean solution.
Are we doing something wrong in our setup? It seems right to me to put tables with general data in a separate database instead of having a "countries" table in every database; that seems difficult to maintain and inefficient.
What could be a good approach to overcome this?
I would say that countries is not a terrible table to reproduce in multiple databases. I would rather duplicate static data like that than use more elaborate techniques. There is one physical schema per database in SQL Server, and a schema cannot be shared across databases. That is why people use replication or triggers for shared data.
I came across this problem a while back. We have one database for authentication, but those users have to be shared across multiple applications, some of which have their own database.
Here is my question on this topic.
We resorted to replication and a custom Authentication/Registration service agent to keep the data up to date.
Using views, as Sourav_Agasti suggested in his answer, would be the most straightforward approach for static data. You can create views that join data from other databases or from linked servers.
Create a loopback linked server and then create a view (if required, in each database) which accesses the table in this "central database" through the linked server. There will be a minor performance impact, but it is more than compensated for by how simple the approach is.
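A minimal sketch of that setup, with illustrative names (a central database called General holding a Countries table):

-- Register the local instance under a second name so it can be
-- addressed as a linked server. SQLNCLI is the classic provider;
-- newer installs would use MSOLEDBSQL instead.
DECLARE @srv sysname = @@SERVERNAME;
EXEC sp_addlinkedserver
    @server = N'LOOPBACK',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = @srv;
GO

-- In each plugin database, expose the central table through the
-- linked server with a four-part name.
CREATE VIEW dbo.Countries
AS
SELECT CountryCode, CountryName
FROM LOOPBACK.General.dbo.Countries;
GO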

SqlServer: How to get meta-data about tables and their relationships?

I was wondering if there is a (relatively simple, I hope) way to get information about a table, its attributes, and its relationships?
Clarification: I want to grab all tables in the database and get the meta-model for the whole database: tables, column data, indexes, unique constraints, relationships between tables, etc.
The system has a data dictionary in sys.tables, sys.columns, sys.indexes, and various other catalog views. You can query these to get metadata about the database structure. This posting has a script I wrote a few years ago to reverse-engineer a database schema. If you take a look at it you can see some examples of how to use the system data dictionary tables.
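For instance, a short query against the catalog views that lists every foreign-key relationship in the database (the column aliases are illustrative):

-- One row per foreign key: which table references which.
SELECT fk.name AS constraint_name,
       OBJECT_NAME(fk.parent_object_id) AS child_table,
       OBJECT_NAME(fk.referenced_object_id) AS parent_table
FROM sys.foreign_keys AS fk
ORDER BY child_table;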
There are a whole bunch of system views in the INFORMATION_SCHEMA schema in SQL Server 2005+. Is there anything in particular you're wanting?
Some of those views include:
CHECK_CONSTRAINTS,
COLUMNS,
TABLES,
VIEWS
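As a quick illustration, this lists every column in the database through those views:

SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;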
Try sp_help <tablename>. This will show you foreign key references and data about the columns, etc. That is, if you are interested in a specific table, as your question seemed to indicate.
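For example (the table name is illustrative):

EXEC sp_help 'dbo.Orders';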
If using .NET code is an option, SMO is the best way to do it.
It abstracts away all these system views and tables, hiding them behind nice and easy-to-use classes and collections.
http://msdn.microsoft.com/en-us/library/ms162169.aspx
This is the same infrastructure SQL Server Management Studio itself uses. It even supports scripting.
Abstraction comes at a cost, though, so if you need maximum performance you'd still have to use the system views and custom SQL.
You can try to use the db-meta library.

Linking tables between databases

I'm after a bit of advice on the best way to go about this in SQL Server 2008 R2 Express. I have a number of applications in separate databases on the same server. They are all "plugins" that use a central staff/structure list that will be held in a separate database. The application is in the process of being migrated from JET.
What I'm looking for is the best way for all the "plugin" databases to see the central database and use its tables in standard queries, views, etc.
As I'm using Express, that rules out any replication solution, and so far the only option I can think of is to use triggers or a stored procedure to "push" out all the changes to the plugins. The information needs to be populated on a near-real-time basis; however, the number of changes will be very small, maybe up to 100 a day, and the biggest table only has about 1000 rows at the moment (the staff names table).
Hopefully that covers everything, but if anyone needs more details then just ask.
Thanks
Apologies if I've misunderstood, but from your description it sounds like all these databases are hosted on the same instance of SQL Server; it's your mention of replication that makes me uncertain.
Assuming that's the case, you should be able to replace any copies of tables from the central database that are held in the "plugin" databases with views or synonyms referencing the central tables directly, since SQL Server allows you to make references between databases on the same server using three-part naming (database_name.schema_name.object_name).
For example, if each plugin db has a table StaffNames, you could replace this with a view by dropping the table, then creating a view:
drop table StaffNames
go
create view StaffNames
as
select * from <centraldbname>.<schema - probably dbo>.StaffNames
go
and your code should continue to work seamlessly, as long as permissions are set up.
Alternatively, you could replace all the references to the shared tables in the plugin databases with three-part name references to the central database, but the view method requires less work.
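A synonym achieves the same thing without a view definition; a minimal sketch, with CentralDB standing in for the central database name:

drop table StaffNames
go
create synonym dbo.StaffNames
    for CentralDB.dbo.StaffNames
go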

Which is faster for SSIS, a View on a table or a Conditional Split?

I have an SSIS project where one of the steps involves populating a SQL Server table from an Oracle Table.
The Oracle table has a column ssis_control_flag. I want to pull across all records that have this field set to 'T'.
Now, I was wondering which would be the best way of doing this, and the two options detailed in the question presented themselves.
So really, I am wondering which would be faster/better. Should I create a conditional split in the SSIS package that filters out the records I want? Or should I create a view in Oracle that selects the records based on the criteria, and use that view as the data source in SSIS?
Or is there an even better way of doing this? Your help would be much appreciated!
Thanks
Why don't you use a WHERE clause to filter the records, instead of creating a view? Maybe I am not getting your question correctly.
In general, bringing all the data into SSIS and then filtering it is not recommended, especially when you can do the filtering at the source DB end itself. Consider the network bandwidth costs as well.
And this particular filter cannot be done more efficiently in SSIS than it can be at the database, so it is better to do it in the Oracle DB itself.
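Concretely, the source query for the data flow could simply be the following (the Oracle table name is a placeholder; ssis_control_flag is the column from the question):

SELECT *
FROM source_table
WHERE ssis_control_flag = 'T'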
You can use a query with OPENROWSET as the source for the data flow instead of directly accessing the Oracle table.
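A rough sketch of that approach, assuming the OraOLEDB.Oracle provider is installed and 'Ad Hoc Distributed Queries' is enabled on the SQL Server side; the server name and credentials are placeholders:

SELECT *
FROM OPENROWSET('OraOLEDB.Oracle',
                'MyOracleServer'; 'my_user'; 'my_password',
                'SELECT * FROM source_table WHERE ssis_control_flag = ''T''');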
