I've only been using SSIS for a short time, but I already have numerous complaints. Here are my current issues:
In order for a package to store a password, you need to encrypt it. Even if the package is part of a larger solution, you have to supply the password every time you open any of the encrypted packages. Why can't you just encrypt the whole solution with one password? I have a solution with 10 encrypted packages. When I hit "Build", I have to enter 10 passwords.
Encrypting credentials is great. Deploying an encrypted package to the server, supplying your password, testing it successfully, scheduling it, and then having it fail during the scheduled run because it can't decrypt itself SUCKS. This seems to happen at random, and I have had to redeploy a given package several times before it could actually decrypt its credentials successfully during a scheduled job.
Windows authentication only? Maybe this is a security feature, but it makes it a real pain in the butt to remotely manage the server. It basically forces me to use remote desktop. Does it really matter that I can't access SSIS when I have access directly to the DB Engine???
DTS Support. DTS was pretty ugly, but it worked, and was pretty straightforward. Why didn't they provide the DTS 2000 package designer WITH SSIS??? Now I need to go download it and install it with admin privileges.
UPSERTS??? I replicate some data to an external database, and upserting to that database is SUCH A PAIN. Why isn't this functionality built in? Why can't I just say "This is the key column. Update if exists, create if it doesn't".
Are these valid issues, or am I just too new to the product to know how to do things the right way?
Do others have the same issues, or other issues?
Are there easy alternatives to using SSIS?
The following links from #SQLServerSleuth might shed some light on the situation - a back and forth re: SSIS in 2005. Are you on SQL 2008, or still working with SQL 2005? This picture changed a bit in 2008.
SSIS' 15 Faults
SSIS: The backlash (1)
SSIS: The backlash (2)
SSIS: A response from Microsoft to some growing criticism
In my system it was overall easier to just develop the data loads in C#. The loads are rock solid and do not change unless we want them to change, so we don't spend any more time on them once development is done.
Check out Package Configuration files for some of the security issues.
Do you actually need the encryption on each package? You can choose not to store sensitive data at all if you aren't storing FTP or other authentication passwords. Configuration files are also a good idea. I recommend BIxpress (www.pragmaticworks.com/products/Business-Intelligence/BIxpress/), as it will create all the config files for you, log the crap out of your packages, and provide awesome, blow-your-socks-off graphical reporting for next to nothing cost-wise...
Let me preface this by saying that SSIS sucks. It's a pain to work with, manage, and develop for. While there are tools that make things better, these features should have been included from the start. Let me also say that I haven't found (and don't believe there currently exists) a better tool for scalable, high-performance data loads than SSIS.
1, 2: Set the package to "Don't Save Sensitive" and use either configurations or "Set Values" inside whatever execution context you are using.
3: Agreed, partially. Browsing the package store through SQL auth would be nice, but executing the package should absolutely not be allowed (under what context would you execute?).
You can always execute through the job.
4: Not related to SSIS.
Besides, DTS is deprecated, and in most ways it is considerably less flexible and harder to manage than even SSIS.
5: Upserts are admittedly trickier than they could be, but if done right, they can work flawlessly:
Use a lookup to determine whether you need to insert or update, and define your logic accordingly.
Side note: seriously consider setting up a package template. If done right, you can alleviate many of these concerns from the start. I may need to publicly post my package template at some point.
We ran into many of the same issues, especially #5, so I agree these are valid. In general, I found SSIS to be a massive pain to work with.
For 1, 2, I use package configurations.
For 5 you can use a slowly changing dimension task or the third-party table difference component. I personally prefer to load to a staging table and code the UPSERT in SQL.
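To make that last suggestion concrete, here is a minimal sketch of the staging-table upsert, driven from C# purely for illustration. The table and column names are made up, and MERGE requires SQL Server 2008; on 2005 you would run an UPDATE joined to the staging table followed by an INSERT of the non-matching rows instead.

using System.Data.SqlClient;

// Minimal sketch: merge a pre-loaded staging table into the target table.
// Staging_Customer, Customer, CustomerKey and Name are invented names; the
// bulk load into the staging table (SSIS, SqlBulkCopy, etc.) happens first.
static class Upsert
{
    public static void MergeStagingIntoTarget(string connectionString)
    {
        const string mergeSql = @"
            MERGE dbo.Customer AS target
            USING dbo.Staging_Customer AS source
                ON target.CustomerKey = source.CustomerKey
            WHEN MATCHED THEN
                UPDATE SET target.Name = source.Name
            WHEN NOT MATCHED BY TARGET THEN
                INSERT (CustomerKey, Name) VALUES (source.CustomerKey, source.Name);";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(mergeSql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}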
I've been using SSIS fairly intensively on a DW project for the last two years, and I find it has a few quirks, but it is far more powerful than DTS.
We are working on a project that uses a database-first model.
At the beginning of the project we decided to use a central DB that all developers worked against.
But in the middle of the project the developers decided to use their own DBs, cloning the central DB to their local servers. So they work and make changes locally, but migrating those DB changes back to the centralized DB is hard, and we are doing it by hand.
Is there a good way to manage DB changes to a centralized DB?
but migrating those DB changes back to the centralized DB is hard, and we are doing it by hand.
And it never occurred to you that this would not scale? We run a four-person team with, by now, nine environments - it is impossible to do this by hand. On top of that you WILL make errors, and if those happen during a production deploy, that is bad bad bad.
That said, this is a problem solved like 50 years ago. Learn from history. Here is our approach:
DB Change scripts.
In folders numbered by sprint
There is a SchemaSync table in every DB tracking which scripts have executed and their checksums
Missing/changed scripts are executed during deploy or manually (from a zip file) via a PowerShell cmdlet.
This is seriously a standard solution that predates SSDT and migrations by ages, and it is likely older than you are. There are even open source implementations such as https://github.com/DbUp/DbUp, as well as commercial libraries (one of them is actually available in Visual Studio so you can try it out): https://www.red-gate.com/products/sql-development/sql-change-automation/
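For what it's worth, a DbUp runner is only a few lines. This is a rough sketch (the connection string is a placeholder, and it assumes the change scripts are embedded as resources in the same assembly):

using System;
using System.Reflection;
using DbUp;

// Rough sketch of a DbUp runner: scripts embedded in the assembly are applied
// once each, and DbUp records what has already run in its journal table.
class Program
{
    static int Main()
    {
        var connectionString = "Server=.;Database=MyDb;Integrated Security=true"; // placeholder

        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();
        if (!result.Successful)
        {
            Console.Error.WriteLine(result.Error);
            return -1;
        }
        return 0;
    }
}

Hook that into your build or deploy step and every environment converges on the same schema without anyone running scripts by hand.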
Like the only way to go.
SSDT: it loves dropping fields. How do you handle DB migrations with that? Possible with multiple steps? Not possible. SSDT's idea of change scripts is ignorant of data moves. If you ever do a larger refactoring (which I happen to do a lot), then you know that sometimes it is not just "create new field" but a number of steps that partially run complex code. SSDT totally cannot handle that.
After having used an application for over 10 years, and been constantly limited by its lack of extensibility, we have decided to rewrite it fully from scratch. Because the new architecture differs from the old application, the database is also different. Here comes the problem: Is there any industrial process for migrating the data from the previous database to the new one? Some tables are alike, others are not. Overall, we need a process that will help us make sure that no data or logical constraints are lost during the migration.
PS: The old and new database are both Oracle databases.
Although you don't specify this in your question, I assume that you're going to develop the new version of your application/database, and then at some switchover point you need to migrate all of the live data from your old database into your new database.
If this is the case, then you're really asking about two distinct processes: the migration (with some modifications) of the database structure, followed later by the migration of the data itself.
For the first process, the best tool is you, the developer (I don't mean you're a "tool" - you know what I mean). You could bring over the structure of the old database and then change it as necessary for the new version; however, this approach in general tends to leave too much of the old structure behind. I think it's better to take advantage of the situation and rebuild the database from the ground up, using the original database just as a general reference.
For the second process, I would treat the data migration as a separate task requiring a separately-written and -tested application. This application could be a set of scripts or a compiled application or whatever is most convenient for you. Because your old and new databases will not have the same structure (and may in fact be very different), there are no commercial tools out there that will handle this task for you automagically. By treating this as a distinct application that you write yourself, you can test the data conversion process many times before your "go live" date.
I've heard of several different ways to attack problems such as this. The simplest solution I've seen is to use a Microsoft Access database and ODBC connections to connect to both the new and old Oracle databases. You can then use Access to migrate and transform the data as you need.
The more elegant solution involves installing the Microsoft SQL Server development tools. You can use Business Intelligence Development Studio to create an SSIS package with two Oracle endpoints. SSIS can handle the heavy lifting of transforming the data between the databases, and you can run the package locally so you don't have to have an instance of SQL Server running anywhere.
There's a tutorial series for SSIS at:
http://www.developerdotstar.com/community/node/364
You might also want to check out Oracle Warehouse Builder (OWB). The name is a little confusing, but it's Oracle's ETL (Extract, Transform, and Load) package. I've never used it personally, but it might do what you're looking to do as well.
I've been searching for some time for a good solution to implement the idea of managing schema on an SQL Server Compact 3.5 database.
I know of several ways of managing schema on SQL Server Express, SQL Server Standard, SQL Server Enterprise, but the Compact Edition doesn't support the necessary tools required to use the same methodology.
Any suggestions/tips?
I should expand this to say that it is for 100+ clients with wrapperware software. As the system changes, I need to publish update scripts alongside the new binaries to the clients. I was looking for a decent method of publishing this without having to just hand the client a script file and say "Run this in SSMSE". Most clients are not capable of doing that.
A buddy of mine shared a partial script for handling the SQL Server piece of my task, but it never covered Compact Edition. It looks like I'll be on my own for this.
What I think I've decided to do, and it's going to need a "geek week" to accomplish, is to write some sort of tool much like how WiX and NAnt work, so that I can just write an overzealous XML document to handle the work.
If I think that it is worthwhile, I'll publish it on CodePlex and/or The Code Project because I've used both sites a bit to gain better understanding of concepts for jobs I've done in the past, and I think it is probably worthwhile to give back a little.
Edit on 5/3/2010:
If someone is willing to "name" the project, I'll upload the dirty/nasty version that I've written for MS SQL to CodePlex so that maybe we can start hacking out a version of SQL Compact. Although, I think with the next revision of the initial application that I was planning, I'm going to be abandoning SQL Compact and just use XML Files for storage, as the software is being converted from an Installable package to being a Silverlight application. Silverlight just gives a better access strategy.
I am currently looking into Migrator.Net.
This allows you to write changes to your database, called migrations, directly in C#.
These migrations can contain everything from simple table additions/drops, column modifications, to complicated data update code.
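For illustration, a migration class looks roughly like this; the table and column names here are invented, and the exact API details may vary a little between Migrator.Net versions.

using System.Data;
using Migrator.Framework;

// A sketch of a Migrator.Net migration: the number in the attribute orders the
// migrations, Up() applies the change and Down() reverses it.
[Migration(2)]
public class AddCustomerTable : Migration
{
    public override void Up()
    {
        Database.AddTable("Customer",
            new Column("Id", DbType.Int32, ColumnProperty.PrimaryKeyWithIdentity),
            new Column("Name", DbType.String, 100));
    }

    public override void Down()
    {
        Database.RemoveTable("Customer");
    }
}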
When your application boots, it can verify what version the database is currently in and apply any migrations that are required to bring it up to date. All this is handled automatically. The code to run this update is as simple as:
Assembly asm = Assembly.Load("LocalModels.migration");
Migrator m = new Migrator("SqlServerCe", "Data Source=LocalModels.sdf", asm, false);
m.MigrateToLastVersion();
I am having a couple minor issues with the Compact support (it assumes the default schema is dbo). But I don't think it will be too difficult to fix them.
Some random thoughts (not sure I can fully answer, though):
The Microsoft Sync Framework is one option. I haven't had a chance to fully appreciate what it can do beyond the initial deployment (which seems to work fine). There's an MSDN site for it here.
You can execute scripts on a mobile device, but not through something like SQL Server Management Studio, so in theory you could manage/maintain T-SQL scripts. The downside is that the T-SQL would be convoluted (restricted to CE's supported statements), and I don't know a way to "automate" execution - but the Sync Framework might hold some answers.
If one of your key criteria is going to be working efficiently over a small pipe, the only real choice you have is to store a DB Schema Version (maybe somehow tied to the scripts checked into your CMS) and when an update is needed, the change scripts are sent over the wire and applied in order. You would probably want to keep a log in your DB as well of these scripts being applied so you can gracefully handle disconnects, reboots and other potentially nasty problems.
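To sketch that idea (with made-up table names, and assuming the SchemaVersion table already exists in the .sdf), the updater on the device could look something like this:

using System;
using System.Collections.Generic;
using System.Data.SqlServerCe;

// Hypothetical updater for a SQL CE database: statements are grouped by schema
// version, and anything newer than the version recorded in the SchemaVersion
// table is applied and then logged. Creating SchemaVersion itself is omitted.
class CompactSchemaUpdater
{
    public void Upgrade(string connectionString, SortedDictionary<int, string[]> statementsByVersion)
    {
        using (var conn = new SqlCeConnection(connectionString))
        {
            conn.Open();
            int current = GetCurrentVersion(conn);

            foreach (var version in statementsByVersion)
            {
                if (version.Key <= current) continue;            // already applied
                foreach (string sql in version.Value)            // SQL CE runs one statement per command
                    using (var cmd = new SqlCeCommand(sql, conn))
                        cmd.ExecuteNonQuery();

                using (var log = new SqlCeCommand(
                    "INSERT INTO SchemaVersion (Version, AppliedOn) VALUES (@v, GETDATE())", conn))
                {
                    log.Parameters.AddWithValue("@v", version.Key);
                    log.ExecuteNonQuery();
                }
            }
        }
    }

    private static int GetCurrentVersion(SqlCeConnection conn)
    {
        using (var cmd = new SqlCeCommand("SELECT MAX(Version) FROM SchemaVersion", conn))
        {
            object result = cmd.ExecuteScalar();
            return (result == null || result == DBNull.Value) ? 0 : Convert.ToInt32(result);
        }
    }
}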
Is SQL Server Management Studio any use for you?
http://technet.microsoft.com/en-us/library/ms172933.aspx
So recently on a project I'm working on, we've been struggling to keep a solution's code base and the associated database schema in sync (Database = SQL Server 2008).
Database changes occur fairly regularly (adding columns, constraints, relationships, etc.), and as a result it's not uncommon for people to do a "Get Latest" from source control and find that they also need to rebuild the database (and sometimes they forget to do the latter).
We're not using VSTS: Database Edition (DataDude) but the standard Visual Studio database project with a script (batch file) which tears down and recreates the database from T-SQL scripts. The solution is a .NET & ASP.NET solution with LINQ to SQL as the underlying ORM.
Anyone have ideas on an approach to take (automated or not) which would keep everyone up to date with the latest database schema?
Continuous integration with MSBuild is an option, but only helps pick up any breaking changes committed, it doesn't really help in the scenario I highlighted above.
We are using Team Foundation Server, if that helps..
We try to work forward from the creation scripts.
i.e. a change to the database is not authorised unless the script has been tested and checked into source control.
But this assumes that the database team is integrated with your app team which is usually not the case in a large project...
(I was tempted to answer this "with great difficulty")
EDIT: Tools won't help you if your process isn't right.
OK, although it's not the entire solution, you should include an assertion in the application code that connects to the database to assert that the correct schema is being used. That way it at least becomes obvious, and you avoid silent bugs and people complaining that stuff went crazy all of a sudden.
As for the schema version, you could use some database-specific functionality if available, but I personally prefer to declare a schema version table and keep the version number in there; that way it's portable and can be checked with a simple SELECT statement.
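Something like the following minimal sketch, called once at application start, is all it takes; the SchemaVersion table and column names are just placeholders.

using System;
using System.Data.SqlClient;

// Minimal startup assertion: compare the version stored in the database with
// the version this build of the application expects, and fail fast otherwise.
static class SchemaGuard
{
    public static void AssertSchemaVersion(string connectionString, int expectedVersion)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT ISNULL(MAX(Version), 0) FROM SchemaVersion", conn))
        {
            conn.Open();
            int actual = Convert.ToInt32(cmd.ExecuteScalar());
            if (actual != expectedVersion)
                throw new InvalidOperationException(
                    string.Format("Database schema is version {0}, but this build expects version {1}.",
                                  actual, expectedVersion));
        }
    }
}

Call it from your application's startup code so a mismatched database fails loudly instead of producing subtle bugs later.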
Have a look at DB Ghost - you can create a dbp using the scripter in seconds and then manage all your database code with the change manager. www.dbghost.com
This is exactly what DB Ghost was designed to handle.
We basically do things the way you are, with the generation script checked into source control as well. I'm the designated database master, so all changes to the script itself are done through me. People send me scripts of the changes they have made, I update my master copy of the schema, run Generate Scripts in SSMS to produce the new DB script, and then check it in. I keep my copy of the code current with any changes that are being made elsewhere. We're a small shop so this works pretty well for us. I realize that it probably doesn't scale.
If you are not using Visual Studio Database Professional Edition, then you will need another tool that can break the database down into its elemental pieces so that they are manageable and changeable in an easier manner.
I'd recommend seriously considering Redgate's SQL tools if you want to maintain sanity over all your database changes and updates.
SQL Packager
SQL Multi Script
SQL Refactor
Use a tool like Red Gate SQL Compare to generate the schema change script between any two given versions of the database. You can then check that file into source control.
Have a look at this question: dynamic patching of databases. I think it's similar enough to your problem to be helpful.
My solution to this problem is simple. Define everything as XML, and make sure that both the database, the ORM and the UI are generated from this XML, no exceptions. That way, you can use code generation tools to quickly regenerate the database creation script, which will alter your schema while (hopefully) preserving some data. It takes some effort to do, but the net result is well worth it.
I wonder how you guys manage deployment of a database between 2 SQL Servers, specifically SQL Server 2005.
Now, there is a development server and a live one. As this should be part of a build script (standard Windows batch for now, though given the current complexity of those scripts I might switch to PowerShell or similar later), Enterprise Manager/Management Studio Express do not count.
Would you just copy the .mdf file and attach it? I am always a bit careful when working with binary data, as this seems to be a compatibility issue (even though development and live should run the same version of the server at all times).
Or - given the lack of "EXPLAIN CREATE TABLE" in T-SQL - do you do something that exports an existing database into SQL scripts which you can run on the target server? If yes, is there a tool that can automatically dump a given database into SQL queries and that runs off the command line? (Again, Enterprise Manager/Management Studio Express do not count.)
And lastly - given the fact that the live database already contains data - the deployment may not involve creating all tables, but rather checking the differences in structure and ALTERing the live tables instead, which may also need data verification/conversion when existing fields change.
Now, I hear a lot of great stuff about the Red Gate products, but for hobby projects the price is a bit steep.
So, what are you using to automatically deploy SQL Server Databases from Test to Live?
I've taken to hand-coding all of my DDL (creates/alter/delete) statements, adding them to my .sln as text files, and using normal versioning (using subversion, but any revision control should work). This way, I not only get the benefit of versioning, but updating live from dev/stage is the same process for code and database - tags, branches and so on work all the same.
Otherwise, I agree redgate is expensive if you don't have a company buying it for you. If you can get a company to buy it for you though, it really is worth it!
For my projects I alternate between SQL Compare from Red Gate and the Database Publishing Wizard from Microsoft, which you can download for free here.
The Wizard isn't as slick as SQL Compare or SQL Data Compare but it does the trick. One issue is that the scripts it generates may need some rearranging and/or editing to flow in one shot.
On the up side, it can move your schema and data which isn't bad for a free tool.
Don't forget Microsoft's solution to the problem: Visual Studio 2008 Database Edition. Includes tools for deploying changes to databases, producing a diff between databases for schema and/or data changes, unit tests, test data generation.
It's pretty expensive but I used the trial edition for a while and thought it was brilliant. It makes the database as easy to work with as any other piece of code.
Like Rob Allen, I use SQL Compare / Data Compare by Redgate. I also use the Database publishing wizard by Microsoft. I also have a console app I wrote in C# that takes a sql script and runs it on a server. This way you can run large scripts with 'GO' commands in it from a command line or in a batch script.
I use Microsoft.SqlServer.BatchParser.dll and Microsoft.SqlServer.ConnectionInfo.dll libraries in the console application.
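For anyone wanting to roll their own, here is a rough sketch of that kind of console runner; as far as I know, ServerConnection (from Microsoft.SqlServer.ConnectionInfo.dll) splits the script on GO separators for you. Argument handling and the connection string are simplified for illustration.

using System.Data.SqlClient;
using System.IO;
using Microsoft.SqlServer.Management.Common;   // Microsoft.SqlServer.ConnectionInfo.dll

// Sketch of a console script runner: usage  ScriptRunner.exe <connectionString> <scriptPath>
class ScriptRunner
{
    static void Main(string[] args)
    {
        string connectionString = args[0];   // e.g. "Server=.;Database=MyDb;Integrated Security=true"
        string scriptPath = args[1];

        string script = File.ReadAllText(scriptPath);

        using (var sqlConnection = new SqlConnection(connectionString))
        {
            var serverConnection = new ServerConnection(sqlConnection);
            serverConnection.ExecuteNonQuery(script);   // handles the GO batch separators
            serverConnection.Disconnect();
        }
    }
}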
I work the same way Karl does, by keeping all of my SQL scripts for creating and altering tables in a text file that I keep in source control. In fact, to avoid the problem of having to have a script examine the live database to determine what ALTERs to run, I usually work like this:
On the first version, I place everything during testing into one SQL script, and treat all tables as a CREATE. This means I end up dropping and re-adding tables a lot during testing, but that's not a big deal early in the project (since I'm usually hacking at the data I'm using at that point anyway).
On all subsequent versions, I do two things: I make a new text file to hold the upgrade SQL scripts, containing just the ALTERs for that version, and I apply the same changes to the original create-from-scratch database script as well. This way an upgrade just runs the upgrade script, but if we have to recreate the DB we don't need to run 100 scripts to get there.
Depending on how I'm deploying the DB changes, I'll also usually put a version table in the DB that holds the version of the DB. Then, rather than make any human decisions about which scripts to run, whatever code I have running the create/upgrade scripts uses the version to determine what to run.
The one thing this will not do is help if part of what you're moving from test to production is data; but if you want to manage structure and not pay for a nice but expensive DB management package, it really is not very difficult. I've also found it's a pretty good way of keeping mental track of your DB.
If you have a company buying it, Toad from Quest Software has this kind of management functionality built in. It's basically a two-click operation to compare two schemas and generate a sync script from one to the other.
They have editions for most of the popular databases, including of course Sql Server.
I agree that scripting everything is the best way to go and is what I advocate at work. You should script everything from DB and object creation to populating your lookup tables.
Anything you do only in the UI won't translate (especially for changes... not so much for first deployments) and will end up requiring a tool like what Redgate offers.
Using SMO/DMO, it isn't too difficult to generate a script of your schema. Data is a little more fun, but still doable.
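As a rough illustration of the SMO side (the server and database names are placeholders):

using System;
using Microsoft.SqlServer.Management.Smo;   // SMO assemblies (Microsoft.SqlServer.Smo.dll, etc.)

// Sketch: script out the user tables of a database with the SMO Scripter.
class SchemaScripter
{
    static void Main()
    {
        var server = new Server("localhost");              // placeholder server name
        Database db = server.Databases["MyDatabase"];      // placeholder database name

        var scripter = new Scripter(server);
        scripter.Options.ScriptDrops = false;
        scripter.Options.WithDependencies = true;
        scripter.Options.Indexes = true;

        foreach (Table table in db.Tables)
        {
            if (table.IsSystemObject) continue;
            foreach (string line in scripter.Script(new SqlSmoObject[] { table }))
                Console.WriteLine(line);
        }
    }
}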
In general, I take the "Script It" approach, but you might want to consider something along these lines:
Distinguish between Development and Staging, so that you can develop with a subset of data. For this I would create a tool to simply pull down some production data, or to generate fake data where security is a concern.
For team development, each change to the database will have to be coordinated amongst your team members. Schema and data changes can be intermingled, but a single script should enable a given feature. Once all your features are ready, you bundle these up in a single SQL file and run that against a restore of production.
Once your staging has cleared acceptance, you run the single SQL file again on the production machine.
I have used the Red Gate tools and they are great tools, but if you can't afford it, building the tools and working this way isn't too far from the ideal.
I'm using SubSonic's migrations mechanism, so I just have a DLL with classes in sequential order that have two methods, Up and Down. There is a continuous integration/build script hooked into NAnt, so that I can automate the upgrading of my database.
It's not the best thing in the world, but it beats writing DDL.
RedGate SQL Compare is the way to go in my opinion. We do DB deployments on a regular basis, and since I started using that tool I have never looked back.
Very intuitive interface and saves a lot of time in the end.
The Pro version will take care of scripting for the source control integration as well.
I also maintain scripts for all my objects and data. For deploying I wrote this free utility - http://www.sqldart.com. It'll let you reorder your script files and will run the whole lot within a transaction.
I agree with keeping everything in source control and manually scripting all changes. Changes to the schema for a single release go into a script file created specifically for that release. All stored procs, views, etc should go into individual files and treated just like .cs or .aspx as far as source control goes. I use a powershell script to generate one big .sql file for updating the programmability stuff.
I don't like automating the application of schema changes, like new tables, new columns, etc. When doing a production release, I like to go through the change script command by command to make sure each one works as expected. There's nothing worse than running a big change script on production and getting errors because you forgot some little detail that didn't present itself in development.
I have also learned that indexes need to be treated just like code files and put into source control.
And you should definitely have more than 2 databases - dev and live. You should have a dev database that everybody uses for daily dev tasks. Then a staging database that mimics production and is used to do your integration testing. Then maybe a complete recent copy of production (restored from a full backup), if that is feasible, so your last round of installation testing goes against something that is as close to the real thing as possible.
I do all my database creation as DDL and then wrap that DDL into a schema maintenance class. I may do various things to create the DDL in the first place, but fundamentally I do all the schema maintenance in code. This also means that if one needs to do non-DDL things that don't map well to SQL, you can write procedural logic and run it between lumps of DDL/DML.
My DBs then have a table which defines the current version, so one can code a relatively straightforward set of tests (sketched just after this list):
Does the DB exist? If not create it.
Is the DB the current version? If not then run the methods, in sequence, that bring the schema up to date (you may want to prompt the user to confirm and - ideally - do backups at this point).
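To give a feel for it, here is a rough sketch of that version-check-and-upgrade loop in C#; the SchemaVersion table and the example DDL are placeholders, and the "does the DB exist? create it" step is left out for brevity.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch of a schema maintenance class: each entry in the list upgrades the
// schema by exactly one version, and applied versions are recorded in
// SchemaVersion so the loop only runs what is missing.
class SchemaMaintainer
{
    private readonly List<Action<SqlConnection>> upgrades = new List<Action<SqlConnection>>
    {
        c => Execute(c, "CREATE TABLE Customer (Id int IDENTITY PRIMARY KEY, Name nvarchar(100))"),
        c => Execute(c, "ALTER TABLE Customer ADD Email nvarchar(200) NULL"),
        // ... add an entry here for each new schema version
    };

    public void EnsureCurrent(SqlConnection conn)
    {
        for (int v = GetVersion(conn); v < upgrades.Count; v++)
        {
            upgrades[v](conn);                            // run the next upgrade step
            Execute(conn, "INSERT INTO SchemaVersion (Version) VALUES (" + (v + 1) + ")");
        }
    }

    private static int GetVersion(SqlConnection conn)
    {
        using (var cmd = new SqlCommand("SELECT ISNULL(MAX(Version), 0) FROM SchemaVersion", conn))
            return Convert.ToInt32(cmd.ExecuteScalar());
    }

    private static void Execute(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn))
            cmd.ExecuteNonQuery();
    }
}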
For a single-user app I just run this in place; for a web app we currently lock the users out if the versions don't match and have a stand-alone schema maintenance app that we run. For multi-user it will depend on the particular environment.
The advantage? Well, I have a very high level of confidence that the schema for the apps that use this methodology is consistent across all instances of those applications. It's not perfect, there are issues, but it works...
There are some issues when developing in a team environment but that's more or less a given anyway!
Murph
I'm currently working on the same thing as you: not only deploying SQL Server databases from test to live, but also covering the whole process from Local -> Integration -> Test -> Production. What makes this easy for me day to day is a NAnt task combined with Red Gate SQL Compare. I'm not working for Red Gate, but I have to say it is a good choice.