SSIS Logging in SQL 2012 - sql-server

In the 2008 R2 version I was using SSIS logging to a sysssislog table in a defined database. 2012 now brings the concept of Integration Services catalogs, which have their own SSISDB database. Is it still necessary to log to the sysssislog tables, or is the information that ends up there now somewhere in SSISDB (which I would expect, since all the reporting for SSIS execution is based on this database as well)?

The logging you are familiar with remains available to you with the 2012 release of SQL Server. That said, database logging no longer has to be explicitly defined in your package if you are using the Project Deployment model (SSISDB).
Out of the box, you get the Basic logging level when you run a package. The other options are None, Performance, and Verbose. Read more about how to set these and other execution parameters from MVP Phil Brammer. Matt Masson of the SSIS team points out which events those levels correspond to in his post What Events are Included in the SSIS Catalog Log Levels.
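Once a package has run, the messages the catalog captured at the current log level can be queried directly. A minimal sketch, assuming the default SSISDB database name:

    -- Recent executions and the messages the catalog logged for them
    SELECT e.execution_id,
           e.package_name,
           e.status,
           em.message_time,
           em.message
    FROM   SSISDB.catalog.executions AS e
           JOIN SSISDB.catalog.event_messages AS em
             ON em.operation_id = e.execution_id
    ORDER BY em.message_time DESC;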
Finally, SSIS Reporting Pack is an open source project from MVP Jamie Thomson that provides different insight into the basic data being captured in the new integration services catalog.
My thoughts: necessary, no. But if you already have a framework built out culling data from that log (we use it for an alerting system), you can keep using it; it remains supported. If you run Integration Services packages from multiple servers, there is no functionality to combine the logging from all those disparate SSISDB catalogs to provide insight into your entire universe. You can get that if all the packages log to a centralized server using the classic technique.
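As a sketch of that classic technique: with every package pointed at one central server via the SSIS log provider for SQL Server, a single query against dbo.sysssislog covers the whole environment (the event-name filter below is illustrative):

    -- Errors and task failures across all packages logging centrally
    SELECT source, sourceid, starttime, message
    FROM   dbo.sysssislog
    WHERE  event IN (N'OnError', N'OnTaskFailed')
    ORDER BY starttime DESC;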

Related

Determining when a SSIS 2008 package was deployed on a server

I want to find when an SSIS 2008 package was deployed under MSDB in an instance of SQL Server. In the table dbo.sysssispackages, I can see package creation date but where can I find the last modified/deployed date of a package?
The date an SSIS package was deployed to the MSDB is not tracked so you do not have the ability to know when a package was deployed, who performed this feat, etc.
With SQL Server 2012+ and the project deployment model, the SSISDB supports the ability to track when a project was deployed and by whom.
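A minimal sketch of what that looks like, assuming the default SSISDB database name:

    -- When each project in the catalog was last deployed, and by whom
    SELECT f.name AS folder_name,
           p.name AS project_name,
           p.last_deployed_time,
           p.deployed_by_name
    FROM   SSISDB.catalog.projects AS p
           JOIN SSISDB.catalog.folders AS f
             ON f.folder_id = p.folder_id
    ORDER BY p.last_deployed_time DESC;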
The best answer I have for you is much the same as Tab has just posted except I tied mine to VerBuild, which is a monotonically increasing number that VS updates whenever you save a package.
If it's absolutely crucial that you have this information, you could look at modifying msdb.dbo.sp_ssis_putpackage. That's definitely unsupported territory, so buyer beware, etc., but depending on your appetite for risk, you could either extend dbo.sysssispackages by adding your custom columns there, or create a new table dbo.sysssispackages_extended and record there who did what and when.
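A rough sketch of that side-table approach (the modification to sp_ssis_putpackage that would populate it is left out; as noted, this is unsupported):

    -- Hypothetical audit table, populated from a modified
    -- msdb.dbo.sp_ssis_putpackage (use at your own risk)
    CREATE TABLE dbo.sysssispackages_extended
    (
        package_id  uniqueidentifier NOT NULL,
        deployed_by sysname          NOT NULL DEFAULT (SUSER_SNAME()),
        deployed_at datetime         NOT NULL DEFAULT (GETDATE())
    );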
This information is not stored, and is not available for retrieval from SQL Server.
The best way to make this information available that I have found is to use the Version-related fields (VersionMajor, VersionMinor, VersionComments) in the SSIS package. Combined with use of source control, you can see which version of your package is currently live on your server, and find that version in source control to find which version of the code it is.
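Those version fields are exposed in msdb alongside the creation date, so a quick sketch like this shows which build of each package is live (column names as they appear in msdb.dbo.sysssispackages):

    -- Version metadata for packages deployed to MSDB
    SELECT name,
           createdate,
           vermajor,
           verminor,
           verbuild,
           vercomments
    FROM   msdb.dbo.sysssispackages
    ORDER BY name;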

Quickly changing SSIS-packages data source parameters for easy migration

I need to migrate a SQL database from Sybase to MS SQL Server. Before doing the actual migration on the production server, I first created an SSIS package with SQL Server Management Studio's Import/Export Wizard for testing with other databases. The test migration was successful and I would now like to deploy my SSIS package to the real servers.
However, it seems I cannot simply run the package in Management Studio choosing different data sources for it; it only runs on the same databases for which it was created. It can be edited in something called SQL Server Business Intelligence Development Studio (or BIDS for short; I am using the SQL Server 2008 version), but going through every data flow task, changing the destination source manually for each of the ~150 tables I am moving, is inefficient and also introduces the possibility of error.
Is there a way to quickly change which data source is to be used for ALL destination sources in ALL the data flow tasks of an SSIS package? If not, what simple method is there for testing migration with test databases first and simply changing the data sources when deploying?
I am using ODBC data sources, but for some of them the package shows OLE DB sources in BIDS instead.
I hope I was clear enough. If you have additional questions, please ask! Thank you!
I would use a variable for the ConnectionString property of the connection manager. A package-level configuration can be very useful for accomplishing this task. There are several ways to do this. I prefer to use a table in SQL Server that holds all the configurations for all packages. This can be especially effective if you have multiple packages and need to dynamically change a set of connection managers across those multiple packages.
The basic steps are:
Right-click on your SSIS design surface and select "Package Configurations..."
Create a package level configuration of Configuration Type "SQL Server"
Store your connection in a Configuration table in SQL Server
Alter your Connection Manager to use a variable for the ConnectionString Property
Populate that variable from the Configuration table via your package level configuration
When it comes time to switch from Test to Production, simply update the connection string in your configuration table
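As a rough sketch of what the wizard builds for the SQL Server configuration type (it names the table [SSIS Configurations] by default; the package path and connection string below are illustrative):

    -- Configuration table created by the package configuration wizard
    CREATE TABLE dbo.[SSIS Configurations]
    (
        ConfigurationFilter nvarchar(255) NOT NULL,
        ConfiguredValue     nvarchar(255) NULL,
        PackagePath         nvarchar(255) NOT NULL,
        ConfiguredValueType nvarchar(20)  NOT NULL
    );

    -- One row feeding a connection string into a package variable
    INSERT INTO dbo.[SSIS Configurations]
    VALUES (N'DestinationDB',
            N'Data Source=PRODSRV;Initial Catalog=Sales;Integrated Security=SSPI;',
            N'\Package.Variables[User::DestConnectionString].Properties[Value]',
            N'String');

Switching from test to production is then a single UPDATE of ConfiguredValue in that table.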
This is part of a larger package management framework that I implemented using this book:
Microsoft SQL Server 2008 Integration Services: Problem, Design, Solution
I highly recommend it. It should take less than a day to set up, and the book has step-by-step instructions.
This question and its associated answers are also helpful.

Should SSIS Packages run on a database SQL Server or a separate app server?

It seems to be a policy in my company to place application code on a separate server from the database.
We are a data warehouse team that mostly uses SSIS and T-SQL to transform and load data into a SQL Server. We would like to use a C# .NET app to do some of the steps.
For example, to iterate through a file and call a web service.
Given that SQL Server 2008 now automatically installs .NET 3.5 and supports .NET stored procedures, is it justified to prohibit ETL code written in .NET from running on the database server? Having both SSIS and .NET code running on the same box would help simplify our workflow, so we don't have to worry about a scheduling app having to control flow across servers.
I do understand that it would be appropriate, for example in a web app, to separate the business logic tier from the db tier.
One possible snag: in my company, the DBAs are not admins of the app servers and do not have rights to install the database client tools on them, and the app server admins probably should not have anything to do with installing database client tools. If they did, there would have to be coordination between the app server admins and the DB server admins. Perhaps the DBAs could share ownership of the app server. How do companies usually handle this?
Best practice is to leave SQL Server completely alone on its box. SQL Server uses user-mode cooperative multitasking and resource control that assumes 100% ownership of the system and does not play well if other processes are stealing memory or CPU. See Dynamic Memory Management and SQL Server Batch or Task Scheduling.
As for .NET doing ETL web calls from inside SQLCLR: don't even think about it. Never do any sort of blocking operation from SQLCLR; you'll starve the SQL scheduler.
As a general rule, you should have your SSIS server be separate from the SQL Server as SSIS is an app layer. Next, your application code should run on a separate server (can be the same as the SSIS server).
Keep your scaling concerns separate.
It depends on the code, the availability needed of the database, and the size of the box. If you're doing a lot in memory on the SSIS pipe or in your C# app, I'm a proponent of putting it on a separate box (all of the ETL, not just some of it). If you're just using SSIS to call stored procs on the database, it's fine to leave it on the same system.
That being said, I'd avoid splintering the ETL across boxes unless there's an overwhelming reason to do so. It adds a lot of complexity for not much benefit (usually).
That being said, if you need C# stuff to run, you could always use the script tasks in SSIS to control its execution.
We implemented SSIS by placing the DTS packages on the same server as the SSIS server, with a Windows service that lives on another server and executes the DTS packages remotely.

Distribute OLAP cubes as part of application setup

We currently have our custom application that is being distributed with our database (SQL 2005/2008). It is an easy task: before we release a new version, we just pack our database into SQL initialization scripts (these create tables and populate data). We use SQL Server Management Studio to generate these scripts.
As a next step we would like to deploy an OLAP cube (along with ETL commands made with Integration Services) that would be used to analyze the data in the original database.
We know how to create and design a cube, but I do not even know how we could generalize all these packages and deploy them as a solution, script, or something else that our customers could install on their servers. Customers do not have Visual Studio, and we need to create "something" in a wizard (with some input required from the customer, e.g. OLAP cube name, server, etc.) for them to deploy it.
How do you do that?
From Here:
Microsoft SQL Server 2005 Analysis Services (SSAS) provides three tools for deploying an Analysis Services database onto an Analysis Services server in the production environment:
Using an XML Script: Use SQL Server Management Studio to generate an XML script of the metadata of an existing Analysis Services database, and then run that script on another server to recreate the initial database.
Using the Analysis Services Deployment Wizard: Use the Analysis Services Deployment Wizard to use the XMLA output files generated by an Analysis Services project to deploy the project's metadata to a destination server.
Synchronizing Analysis Services Databases: Use the Synchronize Database Wizard to synchronize the metadata and data between any two Analysis Services databases.
In addition to using one of the deployment tools, you can deploy Analysis Services by using the backup and restore functionality. For more information, see Backing Up and Restoring an Analysis Services Database.
The Analysis Services Deployment Wizard can be found in your Start menu under SQL 2005, Analysis Services, Deployment Wizard. This takes the .asdatabase file in your bin directory and creates an XMLA script that creates the SSAS database.
Links:
Using the Analysis Services Deployment Wizard
Readme for Ascmd Command-line Utility Sample
Alternatively, you can use a tool to build the cubes and schemas that provides a simple mechanism for deploying initial implementations and a smooth upgrade path.
As you know, deployment isn't just a case of implementing a database (even an OLAP database) in the target environment. There's also the ETL and the tables to consider, which means ensuring that at every step of the way you're creating table/SQL scripts; and all this is fine and dandy until you come to provide an upgrade to your product and need to upgrade the SSIS/DW relational schema tables and the SSAS cube structures.
What you find is MS is no help at all here. It's helpful for initial deployments, but doesn't provide much in the way of in situ upgrades.
This is a problem that we have faced up to and developed a tool to address, so that we're able to do the things you are trying to do, but do them smoothly, leaving our technicians to focus on building high-quality data warehouses rather than on mundane, annoying, fraught-with-danger but necessary things like "upgrades".
Check out http://www.dataacademy.com; this is the product we've developed to do just what you are trying to do. Drop me a mail if you'd like to discuss further.
Cheers and the best of luck.

Transfer objects and data between SQL 2005 databases

I want to transfer objects (tables, stored procedures, data, etc.) between two servers (a dev box and a live box) and was wondering what the best approach for doing this is.
In SQL Server 2000, you could transfer all objects and data between databases. Now the only options are 'copy data' and 'write a query'. Where has the second option gone?
Both databases are SQL 2005 (with service pack 2). When transferring, primary keys and relationships should be kept intact as well as all the views and other associated data with regards to ASP.NET authentication. Integration Services is not setup up on the live server, so that is not an option.
The only way I can think of is generating scripts, then running them on the other server, but that is more time consuming than the old way (this is how I am doing it now).
If you are willing to pay, I recommend SQL Compare and SQL Data Compare from Red Gate.
Very useful products.
Database Publishing Wizard
http://sqlhost.codeplex.com/
It's a shame you haven't got Integration Services installed as you could use the "Copy Database Wizard". I believe this creates an SSIS package that runs on the destination server.
If you have Visual Studio 2008, you could try the Data comparison and Schema comparison tools.
Your best bet is probably a schema & data comparison tool; there's various tools listed at http://www.mssqltips.com/tip.asp?tip=1069
You don't mention the scope of your application or the number of developers, etc., so it is a little hard to make any recommendations. However, if your development consists of multiple concurrent projects and multiple developers and you are copying from a Development to Production I would recommend something like the following:
implement 3 "areas": dev, qa, production.
develop all changes in dev, create all changes in scripts (see the sketch below), use something like cvs or sourcesafe to track changes on all objects
when changes are ready and tested, run your scripts in qa, this will validate your scripts and install procedure
when ready run your scripts and install procedure on production
note: qa is almost identical to production, except for applied changes waiting for their final production install. dev contains any work-in-progress changes, extra debug junk, etc. You can periodically restore a production backup onto qa and dev to resync them (just make sure all developers are aware of this and plan accordingly), because (depending on the number of developers) production, qa, and dev will start to incur more differences over time.
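As a small sketch of the scripted-changes approach from the list above: each change lives in source control as a re-runnable script that is applied to dev, then qa, then production (the object and column names here are purely illustrative):

    -- Re-runnable change script: add a column only if it is missing
    IF NOT EXISTS (SELECT 1
                   FROM   sys.columns
                   WHERE  object_id = OBJECT_ID(N'dbo.Customers')
                   AND    name = N'LoyaltyTier')
    BEGIN
        ALTER TABLE dbo.Customers
            ADD LoyaltyTier tinyint NOT NULL DEFAULT (0);
    END;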
