I have several SSRS reports that need to be transferred to another server. Currently, all reports and SSRS reside on the same server as SSMS and all databases. I was wondering if there is a simple way to essentially take everything I have on SSRS (all reports) and transfer it, along with the data sources, to the new server. I know downloading all reports and uploading them to the new server is an option, but I was wondering if there is an easier/more logical way to go about this process. After completion, SSRS and the SQL Server will be on different servers but still able to work together.
I have had really great experiences with an old (2007) program called "SSRS Scripter" (from Jasper Smith). It seems like it has been absorbed into a newer product called SqlServerFineBuild (which I have never used). The original stuff is a little hard to find, but very nice to work with, IMO.
Microsoft also has another product called the Reporting Services Migration Tool (free): https://www.microsoft.com/en-us/download/details.aspx?id=29560 It is a command-line utility and isn't the easiest or most helpful to work with. It has a UI mode, but it isn't very user-friendly either.
Both of these will only extract report files and settings from an existing site. You still need to use Visual Studio (plus the reporting add-in), and then some RS project work/design/config, to re-deploy.
I'm looking for the best approach (or a couple of good ones to choose from) for extracting from a Progress database (v10.2b). The eventual target will be SQL Server (v2008). I say "eventual target", because I don't necessarily have to connect directly to Progress from within SQL Server, i.e. I'm not averse to extracting from Progress to a text file, and then importing that into SQL Server.
My research on approaches came up with scenarios that don't match mine:
Migrating an entire Progress DB to SQL Server
Exporting entire tables from Progress to SQL Server
Using Progress-specific tools, something to which I do not have access
I am able to connect to Progress using ODBC, and have written some queries from within Visual Studio (v2010). I've also done a bit of custom programming against the Progress database, building a simple web interface to prove out a few things.
So, my requirement is to use ODBC and build a routine that runs a specific query on a daily basis. The results of this query will then be imported into a SQL Server database. Thanks in advance for your help.
Update
After some additional research, I did find that a Linked Server is what I'm looking for. Some notes for others working with SQL Server Express:
If it's SQL Server Express that you are working with, you may not see a program on your desktop or in the Start Menu for DTS. I found DTSWizard.exe nested in my SQL Server Program Files (for me, C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn), and was able to simply create a shortcut.
Also, because I'm using the SQL Express version of SQL Server, I wasn't able to save the package I'd created. So, after creating the package and running it once, I simply re-ran the package and saved off my SQL for use in the future.
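For anyone finding this later, the linked server itself is just two system procedure calls. Here is a minimal sketch over an ODBC DSN; the server name, the DSN ('ProgressDSN'), and the credentials are placeholders for whatever your Progress setup uses:

    -- Create a linked server over an existing ODBC DSN.
    -- 'ProgressDSN' must be a system DSN pointing at the Progress database.
    EXEC sp_addlinkedserver
        @server     = N'PROGRESS',
        @srvproduct = N'',
        @provider   = N'MSDASQL',      -- OLE DB provider for ODBC
        @datasrc    = N'ProgressDSN';

    -- Map SQL Server logins to the Progress credentials (placeholders).
    EXEC sp_addlinkedsrvlogin
        @rmtsrvname  = N'PROGRESS',
        @useself     = 'FALSE',
        @rmtuser     = N'progress_user',
        @rmtpassword = N'progress_password';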
Bit of a late answer, but in case anyone else was looking to do this...
You can use a linked server, but you will find that the performance won't be as good as connecting directly via the ODBC drivers; also, the translation of the data types may mean that you cannot access some tables. The linked server might be handy, though, for exploring the data.
If you use SSIS with the ODBC drivers (you will have to use ADO.NET data sources), this will perform the most efficiently, and you should also get more accurate data types (remember that the data types within Progress can change dynamically).
If you have to extract a lot of tables, I would look at BIML to help you achieve this. BIML (Business Intelligence Markup Language) can help you generate many SSIS packages dynamically, on the fly, which can be called from a master package. This master package can then be scheduled or run ad hoc, and so can any of the child packages as needed.
Can you connect to the Progress DB using OLE DB? If so, you could use a SQL Server linked server to bypass the need for extracting to a file that would then be loaded into SQL Server. Alternatively, you could extract to Excel and then import from Excel into SQL Server.
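To make that concrete, here is a hypothetical version of the daily import using OPENQUERY over the linked server; the linked server name, the pub.customer table, and dbo.CustomerExtract are all placeholder names:

    -- Pass the query through to Progress and land the results locally.
    INSERT INTO dbo.CustomerExtract (CustNum, Name, Balance)
    SELECT CustNum, Name, Balance
    FROM OPENQUERY(PROGRESS,
        'SELECT CustNum, Name, Balance FROM pub.customer');

OPENQUERY sends the whole statement to Progress for execution rather than pulling tables across and filtering locally, which is usually much faster than four-part naming.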
Silly-sounding question, I know... Let me lay some groundwork first.
I have successfully created a database project comprising the hundreds of tables, stored procedures, indexes, et al. that make up our production database.
I have successfully added the solution to source control (TFS).
I have made a change (as a test) to some of the objects and generated a deployment script, and the whole system is very impressive, I must say. But it seems the strength of VS 2010, from a DB perspective, is deployment, not necessarily development.
I am totally baffled by the day-to-day workflow involved in database/T-SQL development using Visual Studio. Let's suppose I need to add a few columns to a table and modify the related stored procedures to return/update the data for these columns.
While it's easy enough to modify all the scripts in my database model, I'd like to be able to isolate them against a dev database where I can do some testing. But it's as simple as not being able to update a proc if it already exists without manually changing the script to an ALTER (or adding DROP code prior to the CREATE). Having to do this once or twice is a non-issue, but in a real dev environment, we do this all day long.
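For example, the usual workaround is to make every proc script re-runnable by hand, along these lines (dbo.GetCustomer is just a placeholder):

    -- Re-runnable script: drop the proc if it exists, then CREATE it fresh.
    IF OBJECT_ID(N'dbo.GetCustomer', N'P') IS NOT NULL
        DROP PROCEDURE dbo.GetCustomer;
    GO
    CREATE PROCEDURE dbo.GetCustomer
        @CustomerId INT
    AS
    BEGIN
        SELECT CustomerId, Name
        FROM dbo.Customer
        WHERE CustomerId = @CustomerId;
    END
    GO

And that is exactly the boilerplate I'd rather not be maintaining all day long.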
Perhaps the answer is to perform frequent deployments to the dev server as I debug and make changes to procs, for instance? That's quite a bit of overhead; I could execute the necessary scripts manually in a few seconds, whereas building and deploying takes a few minutes. Plus, if three of us are deploying different changes to a dev DB, wouldn't we overwrite each other's modifications?
Sorry to be so longwinded, but I can't help but think I am missing something simple here.
Are there any books/tutorials/webinars that showcase this type of approach to actual development?
I think you've hit the nail on the head. In order to test your modified stored procedures, you have to go through the deployment step to update your database. That's the drawback of the offline development model.
Here at Red Gate we've had numerous requests to make SQL Source Control support the Database Project, which would allow developers to work in the 'online' development model whilst still benefiting from the Database Project features.
[EDIT] We've added beta support for the database project in SQL Source Control, which allows connected SSMS development against the database project format. Simply link to the folder with the .sqlproj file from SQL Source Control and start developing! [/EDIT]
In the meantime, you'll have to keep deploying to dev on a regular basis!
An alternative is to develop on a real database, and use the Schema Compare feature to synchronize back to your Database Project. Schema Compare is available in the Premium and Ultimate editions of Visual Studio.
David Atkinson
Product Manager
Red Gate Software
We use MS SQL Server and C#. Our database is under source control, and I will share some details of our implementation. We have implemented two operations:
Export database to plain-text files. Database schema files:
tables.sql
relationships.sql
views.sql
...
and table contents files:
Data/table1.txt
Data/table2.txt
...
It is easy to review database changes using source control logs because all these files are plain text.
The implementation is based on classes from the Microsoft.SqlServer.Management.Smo namespace.
Import the database from these plain-text files. The implementation is straightforward: just execute the SQL statements from the *.sql files, and then execute a bunch of inserts.
So we have two bat-files: create-test-database.bat and export-test-database.bat. When a developer needs a new test database, he just executes the bat-file and waits a minute.
Every functional test that needs a database creates a new one, uses it, and then kills it. But I should say that this is not a very fast operation. :(
So what instruments do YOU use to put your database under source control?
I mean how do you implement operations "create test database" and "export test database" for example?
I think you are asking two questions here. The first is how to get your database under source control. Your solution is interesting, and I've also used Visual Studio Team Edition for Database Professionals (here's a tutorial I wrote on TDD of stored procedures using it)
The second is how to set up your database for integration tests. Setting up and tearing down the entire database seems like it might be overkill. There are several solutions out there. I've used DBFit. Roy Osherove published a tool called XtUnit a while back that I haven't played with. And, of course, you could always set up your tests to start a transaction in the SetUp and roll it back during teardown.
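The transaction approach is the simplest of the three; at the SQL level it amounts to nothing more than this sketch (the statements in the middle are placeholders for whatever the test actually does):

    BEGIN TRANSACTION;

    -- Arrange test data and exercise the code under test (placeholders).
    INSERT INTO dbo.Customer (Name) VALUES (N'Test customer');
    EXEC dbo.GetCustomerCount;

    -- Teardown: roll everything back so the database is left untouched.
    ROLLBACK TRANSACTION;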
We use Visual Studio for Database Professionals. Don't leave home without it.
I use VSTS for DB Pros. You point it at your SQL server and it analyzes your database and creates the individual files for you. You can even have it generate test data for you. The next release will include support for third party providers (think Oracle, MySQL, DB2).
The really great feature in here is the validation. We found that parts of our database were totally broken (they were vestigial, not used by the code anymore). It basically makes it possible to deploy your database on demand.
I wonder how you guys manage deployment of a database between 2 SQL Servers, specifically SQL Server 2005.
Now, there is a development one and a live one. As this should be part of a build script (standard Windows batch, though given the current complexity of those scripts I might switch to PowerShell or so later), Enterprise Manager/Management Studio Express do not count.
Would you just copy the .mdf file and attach it? I am always a bit careful when working with binary data, as this seems to invite compatibility issues (even though development and live should run the same version of the server at all times).
Or - given the lack of "EXPLAIN CREATE TABLE" in T-SQL - do you do something that exports an existing database into SQL scripts that you can run on the target server? If yes, is there a tool that can automatically dump a given database into SQL queries and that runs off the command line? (Again, Enterprise Manager/Management Studio Express do not count.)
And lastly - given that the live database already contains data - the deployment may not involve creating all tables, but rather checking the differences in structure and ALTERing the live tables instead, which may also require data verification/conversion when existing fields change.
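To illustrate the kind of change I mean, converting a single field might need something like this (table and column names invented for illustration):

    -- Add the new column, convert the existing data, retire the old column.
    ALTER TABLE dbo.Orders ADD OrderTotal DECIMAL(18, 2) NULL;
    GO

    UPDATE dbo.Orders
    SET OrderTotal = CONVERT(DECIMAL(18, 2), OrderTotalText)
    WHERE OrderTotal IS NULL;
    GO

    ALTER TABLE dbo.Orders DROP COLUMN OrderTotalText;
    GO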
Now, I hear a lot of great stuff about the Red Gate products, but for hobby projects, the price is a bit steep.
So, what are you using to automatically deploy SQL Server Databases from Test to Live?
I've taken to hand-coding all of my DDL (create/alter/drop) statements, adding them to my .sln as text files, and using normal versioning (I use Subversion, but any revision control should work). This way, I not only get the benefit of versioning, but updating live from dev/stage is the same process for code and database - tags, branches, and so on all work the same.
Otherwise, I agree Red Gate is expensive if you don't have a company buying it for you. If you can get a company to buy it for you, though, it really is worth it!
For my projects I alternate between SQL Compare from Red Gate and the Database Publishing Wizard from Microsoft, which you can download for free here.
The Wizard isn't as slick as SQL Compare or SQL Data Compare, but it does the trick. One issue is that the scripts it generates may need some rearranging and/or editing to run in one shot.
On the upside, it can move your schema and data, which isn't bad for a free tool.
Don't forget Microsoft's solution to the problem: Visual Studio 2008 Database Edition. Includes tools for deploying changes to databases, producing a diff between databases for schema and/or data changes, unit tests, test data generation.
It's pretty expensive but I used the trial edition for a while and thought it was brilliant. It makes the database as easy to work with as any other piece of code.
Like Rob Allen, I use SQL Compare / Data Compare by Red Gate. I also use the Database Publishing Wizard by Microsoft. I also have a console app I wrote in C# that takes a SQL script and runs it on a server. This way you can run large scripts with 'GO' commands in them from the command line or in a batch script.
I use Microsoft.SqlServer.BatchParser.dll and Microsoft.SqlServer.ConnectionInfo.dll libraries in the console application.
I work the same way Karl does, keeping all of my SQL scripts for creating and altering tables in a text file that I keep in source control. In fact, to avoid the problem of having a script examine the live database to determine which ALTERs to run, I usually work like this:
On the first version, I place everything during testing into one SQL script and treat all tables as CREATEs. This means I end up dropping and re-adding tables a lot during testing, but that's not a big deal early in the project (since I'm usually hacking at the data I'm using at that point anyway).
On all subsequent versions, I do two things: I make a new text file to hold the upgrade SQL scripts, containing just the ALTERs for that version. And I make the same changes to the original create-a-fresh-database script as well. This way an upgrade just runs the upgrade script, but if we have to recreate the DB we don't need to run 100 scripts to get there.
Depending on how I'm deploying the DB changes, I'll also usually put a version table in the DB that holds the version of the DB. Then, rather than making any human decisions about which scripts to run, whatever code runs the create/upgrade scripts uses the version to determine what to run.
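A sketch of the idea, with placeholder names, so the version gate is concrete:

    -- One-time setup: a single-row version table.
    IF OBJECT_ID(N'dbo.SchemaVersion', N'U') IS NULL
    BEGIN
        CREATE TABLE dbo.SchemaVersion (Version INT NOT NULL);
        INSERT INTO dbo.SchemaVersion (Version) VALUES (1);
    END
    GO

    -- Each upgrade script only fires if the DB is on the previous version,
    -- and bumps the version as its last step.
    IF (SELECT MAX(Version) FROM dbo.SchemaVersion) = 1
    BEGIN
        ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL; -- example change
        UPDATE dbo.SchemaVersion SET Version = 2;
    END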
The one thing this will not help with is when part of what you're moving from test to production is data; but if you want to manage structure and not pay for a nice but expensive DB management package, it's really not very difficult. I've also found it's a pretty good way of keeping mental track of your DB.
If you have a company buying it, Toad from Quest Software has this kind of management functionality built in. It's basically a two-click operation to compare two schemas and generate a sync script from one to the other.
They have editions for most of the popular databases, including, of course, SQL Server.
I agree that scripting everything is the best way to go, and it is what I advocate at work. You should script everything, from DB and object creation to populating your lookup tables.
Anything you do in the UI only won't translate (especially for changes... not so much for first deployments) and will end up requiring tools like what Red Gate offers.
Using SMO/DMO, it isn't too difficult to generate a script of your schema. Data is a little more fun, but still doable.
In general, I take the "Script It" approach, but you might want to consider something along these lines:
Distinguish between development and staging, so that you can develop with a subset of data... For this I would create a tool to simply pull down some production data, or to generate fake data where security is a concern.
For team development, each change to the database will have to be coordinated amongst your team members. Schema and data changes can be intermingled, but a single script should enable a given feature. Once all your features are ready, you bundle them up in a single SQL file and run that against a restore of production.
Once your staging has cleared acceptance, you run the single SQL file again on the production machine.
I have used the Red Gate tools and they are great tools, but if you can't afford it, building the tools and working this way isn't too far from the ideal.
I'm using Subsonic's migrations mechanism, so I just have a DLL with classes in sequential order that have two methods, Up and Down. There is a continuous integration/build script hooked into NAnt, so that I can automate the upgrading of my database.
It's not the best thing in the world, but it beats writing DDL.
RedGate SqlCompare is the way to go, in my opinion. We do DB deployments on a regular basis, and since I started using that tool I have never looked back.
Very intuitive interface and saves a lot of time in the end.
The Pro version will take care of scripting for the source control integration as well.
I also maintain scripts for all my objects and data. For deploying I wrote this free utility - http://www.sqldart.com. It'll let you reorder your script files and will run the whole lot within a transaction.
I agree with keeping everything in source control and manually scripting all changes. Changes to the schema for a single release go into a script file created specifically for that release. All stored procs, views, etc. should go into individual files and be treated just like .cs or .aspx files as far as source control goes. I use a PowerShell script to generate one big .sql file for updating the programmability stuff.
I don't like automating the application of schema changes, like new tables, new columns, etc. When doing a production release, I like to go through the change script command by command to make sure each one works as expected. There's nothing worse than running a big change script on production and getting errors because you forgot some little detail that didn't present itself in development.
I have also learned that indexes need to be treated just like code files and put into source control.
And you should definitely have more than two databases - dev and live. You should have a dev database that everybody uses for daily dev tasks. Then a staging database that mimics production and is used for your integration testing. Then maybe a complete recent copy of production (restored from a full backup), if that is feasible, so your last round of installation testing goes against something as close to the real thing as possible.
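Restoring that last copy is a one-off script you can keep with the rest; a hypothetical example (the backup path and the logical file names are placeholders - check them with RESTORE FILELISTONLY first):

    -- Inspect the logical file names inside the backup.
    RESTORE FILELISTONLY FROM DISK = N'D:\Backups\Production.bak';

    -- Restore the production backup under a new name for final testing.
    RESTORE DATABASE MyApp_PreProd
    FROM DISK = N'D:\Backups\Production.bak'
    WITH MOVE N'MyAppData' TO N'D:\Data\MyApp_PreProd.mdf',
         MOVE N'MyAppLog'  TO N'D:\Data\MyApp_PreProd.ldf',
         REPLACE;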
I do all my database creation as DDL and then wrap that DDL in a schema maintenance class. I may do various things to create the DDL in the first place, but fundamentally I do all the schema maintenance in code. This also means that if you need to do non-DDL things that don't map well to SQL, you can write procedural logic and run it between lumps of DDL/DML.
My DBs then have a table which defines the current version, so one can code a relatively straightforward set of tests (sketched in T-SQL after the list):
Does the DB exist? If not, create it.
Is the DB on the current version? If not, run the methods, in sequence, that bring the schema up to date (you may want to prompt the user to confirm and, ideally, take backups at this point).
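In T-SQL terms, the two checks are roughly this (the database name, version table, and target version are placeholders; my real logic lives in code, as described above):

    IF DB_ID(N'MyApp') IS NULL
        CREATE DATABASE MyApp;  -- then the full schema build runs

    IF (SELECT MAX(Version) FROM MyApp.dbo.SchemaVersion) < 42
        PRINT 'Run the upgrade methods, in sequence, after taking a backup.';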
For a single-user app I just run this in place; for a web app we currently lock the user out if the versions don't match and have a stand-alone schema maintenance app we run. For multi-user, it will depend on the particular environment.
The advantage? Well, I have a very high level of confidence that the schema for the apps that use this methodology is consistent across all instances of those applications. It's not perfect, there are issues, but it works...
There are some issues when developing in a team environment but that's more or less a given anyway!
Murph
I'm currently working on the same thing as you: not only deploying SQL Server databases from test to live, but also covering the whole process from Local -> Integration -> Test -> Production. What makes my life easier every day is a NAnt task with Red Gate SQL Compare. I'm not working for Red Gate, but I have to say it is a good choice.