How do I copy the structure only (i.e., empty, no user data) of all tables, views, and indices from one SQL Server database to a new (empty) database?
(If anyone remembers dBase, this was done with "copy struct" for each table. I know it could also be done by reverse-engineering the database structure into SQL statements using a tool like ERwin, but I don't have that either.)
I'm working at a very bureaucratic (maybe even paranoid) client site, where I can only create temp tables and only read from the regular tables. But it's really important that I be able to insert and update in a "safe" area.
You can generate scripts to create a cloned test area for your database; see Documenting and Scripting Databases.
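Given that the asker can only create temp tables, there is also a quick per-table workaround (separate from the scripting approach above): SELECT ... INTO with an always-false predicate copies the column structure but no rows. A minimal sketch, assuming a source table named dbo.Customers (the name is illustrative):

    -- Copies the column definitions of dbo.Customers into a new temp table,
    -- but no rows (the predicate is never true). Indexes, constraints, and
    -- triggers are NOT copied by SELECT ... INTO.
    SELECT *
    INTO #Customers_safe
    FROM dbo.Customers
    WHERE 1 = 0;

This only needs SELECT permission on the source table plus the right to create temp tables, which matches the constraints described in the question.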
Microsoft's SQL Server supports this scenario through partially abstract tools that can work with the database model: visualize it, edit it, transfer it from one place to another, and even compute differences between two models and thereby create a database schema upgrade script.
The part I remember is that those models are stored in *.dacpac packages (essentially zipped XML), and there is a Schema Compare command available if you open a *.sqlproj project in Visual Studio.
This is a whole architectural concept, more abstract than and quite different from dBase, and you can start learning more about it, e.g., at
Microsoft SQL Server - Data-tier applications
(Or try Google: "dacpac deployment")
I have this edmx model (see below) and I would like to add an extra association from Order to Worker (*-1), as shown by the red line. The problem is that the database already has lots of data that I don't want to wipe; is it possible to do this without recreating the tables?
Simply let EF create a new database in your development environment and use a database diff tool to get a change script from the old database to the new one. VS 2010 Premium and Ultimate contain this diff tool, and you can even use it directly from the EDMX designer if you install the Database Generation Tools power pack.
Another popular diff tool is SQL Compare by RedGate.
Just because you have used Entity Designer to start with doesn't mean that you have to totally recreate your database each time.
If you generate the SQL from your changed model, you should be able to find the part referring to your new relationship easily enough. That is the only part you need to run against your database; just split it out into its own file. If in doubt, save a copy of the old SQL file, add your relationship, generate the new file, and create a diff to be sure.
Alternatively, if you have a good understanding of how EF represents data and relationships, you can probably change the database manually. As long as the database and the model are consistent with each other, EF doesn't really care how the database got that way.
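If you do go the manual route, EF's default mapping represents a *-1 association as a foreign-key column on the "many" side. A hedged sketch, assuming tables dbo.Orders and dbo.Workers with an integer key WorkerId (all names here are illustrative, not taken from the actual model):

    -- Add the FK column on the "many" side (NULL so existing rows stay valid),
    -- then the constraint that represents the association.
    ALTER TABLE dbo.Orders
        ADD WorkerId int NULL;
    GO
    ALTER TABLE dbo.Orders
        ADD CONSTRAINT FK_Orders_Workers
        FOREIGN KEY (WorkerId) REFERENCES dbo.Workers (WorkerId);

Making the new column nullable is what lets you add the relationship without touching the data already in the table.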
I have a master database where we define all the information for our software.
It contains
tables
queries
triggers
stored procedures
stored functions
metadata in the tables (the content)
At the moment, with every change, I manually (with some support from SQL Management Studio) edit the files containing all the CREATE, UPDATE, and INSERT statements for the items mentioned above. When I have to create a new database, I run all the xyz.sql files containing my SQL statements.
I know there is a database creation script wizard in Management Studio, but it doesn't include the content data, for example. I also need to make sure everything is executed in the right order for creation (e.g. queries, functions, etc. last, once the underlying tables are available).
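For what it's worth, the ordering part can be pinned down with a master script run in SQLCMD mode, where the :r directive includes each file in sequence. A minimal sketch (the file names are illustrative):

    -- master_build.sql : run with sqlcmd, or in SSMS with SQLCMD Mode enabled.
    -- :r pulls each file in at this point, which fixes the creation order.
    :r .\01_tables.sql
    :r .\02_functions.sql
    :r .\03_views.sql
    :r .\04_procedures.sql
    :r .\05_triggers.sql
    :r .\06_content_data.sql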
At the moment I am thinking about a .NET project where I read all the schema tables and then create the files automatically. In Ruby on Rails, the system creates a schema.rb and YAML files for the data. I tried to work with this, but since many tables were not created by ActiveRecord (old C++ stuff is also running), this won't work for me.
So does anyone have a hint on how best to do this, or a tool that fits my needs?
You can do this very easily in .NET using the SMO frameworks.
There are integrated tools for scripting out in dependency order, and you can script out data as well if you desire.
See my answer here for some info and links.
SQL Compare Pro should be able to load up your DDL creation scripts and deploy them to a target in the correct order. In the Edit Project dialog, make sure you load your scripts as a Scripts Folder. For the data you'll need to use SQL Data Compare Pro. If you have any trouble or have questions, let me know, as I work for Red Gate and will be able to help you with these tools.
I'm a little confused about why you've got UPDATEs given that these scripts create a database from scratch. Shouldn't they all be INSERTs?
SSMS does have the ability to create data scripts as well. You need SSMS 2008, and you need to go to Tasks/Generate Scripts; in the Choose Script Options pane, make sure Script Data is set to True.
If you're looking to maintain these scripts as a sensible way to source control your SQL Server objects, you might want to consider SQL Source Control. This will maintain your schema objects AND static data tables as individual .sql files.
"I know there is a database creation script wizard in management studio, but this for example doesn't include the content data."
You have to look carefully! This built-in script engine can of course include the content data. You just have to click the button labeled "Properties" (or something like that), and there you can change all the SMO script options, including a full data dump.
This ends up in the script with many INSERT INTO... statements.
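For illustration, the data portion of such a script is just plain INSERT statements, something like this (table and values invented):

    INSERT INTO [dbo].[Country] ([CountryId], [Name]) VALUES (1, N'Germany');
    INSERT INTO [dbo].[Country] ([CountryId], [Name]) VALUES (2, N'France');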
In-depth description
Try DbSourceTools.
It is a SQL management tool designed specifically to script SQL databases to disk (including data) and then re-create them using "Deployment Targets".
We are using it for database source control in an agile project.
I am using GitHub for maintaining versions and code synchronization.
We are a team of two and we are located in different places.
How can we make sure that our databases stay synchronized?
Update:
I am a Rails developer, but these days I'm working on Drupal projects (where the database is the center of variation). So I want to make sure the team always has a synchronized database, including the values in the various tables.
I need something that keeps our data values synchronized.
A centralized database is a good solution, but things break down when someone works offline.
If you use Visual Studio, then you can script your database tables, views, stored procedures, and functions as .sql files from a database solution and then check those into version control as well; it's what I currently do at my workplace.
If you don't use Visual Studio, you can still script your SQL as .sql files (with more work) and then version-control them as necessary.
Have a look at Red Gate SQL Source Control - http://www.red-gate.com/products/SQL_Source_Control/
To be honest, I've never used it, but their other software is fantastic. And if all you want to do is keep the DB schema in sync (rather than full source control), then I have used their SQL Compare product very successfully in the past.
(ps. I don't work for them!)
You can use SQL Source Control together with SQL Data Compare to source-control both schema and data. Here is an article from Red Gate: Source controlling data.
These are some of the possibilities.
Using the same database. Set up a central database that everybody can connect to. This way you are sure everybody uses the same database all the time.
After every change, export the database and commit it to the VCS. This option requires discipline and manual labor.
Use some other kind of schema definition. For example, Doctrine for PHP can build the database from a YAML definition, which can be stored in the VCS. This can be automated more easily than option 2.
Use some other software/script which updates the database.
I feel your pain. I had terrible trouble getting SQL Server to play nice with SVN. In the end I opted for a shared database solution. Every day I run an extensive script to backup all our schema definitions (specifically stored procedures) for version control into text files. Due to the limited number of changes this works well.
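A minimal sketch of the kind of export query such a script can be built around, reading procedure definitions from the standard catalog views (the surrounding job would write the output to text files):

    -- One row per stored procedure, with its full CREATE definition.
    SELECT  s.name AS schema_name,
            o.name AS procedure_name,
            m.definition
    FROM    sys.sql_modules AS m
    JOIN    sys.objects     AS o ON o.object_id = m.object_id
    JOIN    sys.schemas     AS s ON s.schema_id = o.schema_id
    WHERE   o.type = 'P'          -- 'P' = stored procedure
    ORDER BY s.name, o.name;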
I now use this technique for our major project and personal projects too. The only negative is that it relies on being connected all the time. The other answers suggest that full database versioning is very time consuming and I tend to agree. For "live" upgrades we use the Red Gate tools, they do both schema and data compare and it works very well.
http://www.red-gate.com/products/SQL_Data_Compare/. We were using this tool to keep databases in sync in our company. Later we had some specific demands, so we had to write our own synchronization code. It depends on how complex your database is and how many changes are happening. It is much simpler if there is a window when no one is working and you can lock the database for synchronization.
Check out OffScale DataGrove.
This product tracks changes to the entire DB - schema and data. You can tag versions at any point in time and return to older states of the DB with a simple command. It also allows you to create virtual, separate copies of the same database, so each team member can have their own separate DB. All the virtual copies are tracked in the same repository, so it's super easy to revert your DB to someone else's version (you simply check out their version, just like you do with your source control). This means all your DBs can always be synchronized.
Regarding a centralized DB - just like you don't want to work on the same source code, you don't want to be working on the same DB. It means you'll constantly break each other's code and builds each time someone changes something in the DB.
I suggest that you go with a separate DB for each developer, and sync them using DataGrove.
Disclaimer - I work at OffScale :-)
Try Wizardby. This is my personal project, but I've used it at my several previous jobs with a great deal of success.
Basically, it's a tool which lets you specify all changes to your database schema in a database-independent manner and then apply these changes to all your databases.
Building and maintaining a database that is then deployed and developed further by many devs is something that goes on in software development all the time. We create a build script and maintain further update scripts that get applied as the database grows over time. There are many ways to manage this, from manual updates to console apps/build scripts that help automate these processes.
Has anyone who has built/managed these processes moved over to a Source Control solution for database schema management? If so, what have they found the best solution to be? Are there any pitfalls that should be avoided?
Red Gate seems to be a big player in the MSSQL world and their DB source control looks very interesting:
http://www.red-gate.com/products/solutions_for_sql/database_version_control.htm
Although it does not look like it replaces the (default) data* management process, so it only replaces half the change management process from my pov.
(when I'm talking about data, I mean lookup values and that sort of thing, data that needs to be deployed by default or in a DR scenario)
We work in a .Net/MSSQL environment, but I'm sure the premise is the same across all languages.
I look after a data warehouse developed in-house by the bank where I work. This requires constant updating, and we have a team of 2-4 devs working on it.
We are fortunate because there is only the one instance of our "product", so we do not have to cater for deploying to multiple instances which may be at different versions.
We keep a creation script file for each object (table, view, index, stored procedure, trigger) in the database.
We avoid the use of ALTER TABLE whenever possible, preferring to rename the table, create the new one, and migrate the data over, as sketched below. This means we don't have to look through a history of ALTER scripts; we can always see the up-to-date version of every table by looking at its create script. The migration is performed by a separate migration script, which can be partly auto-generated.
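A hedged sketch of that rename-and-migrate pattern (table and column names are invented for illustration):

    -- 1. Move the old table out of the way.
    EXEC sp_rename 'dbo.Customer', 'Customer_old';
    GO
    -- 2. Create the new, up-to-date version of the table.
    CREATE TABLE dbo.Customer
    (
        CustomerId int           NOT NULL PRIMARY KEY,
        Name       nvarchar(100) NOT NULL,
        Email      nvarchar(255) NULL   -- newly added column
    );
    GO
    -- 3. Migrate the data across (this part can be partly auto-generated).
    INSERT INTO dbo.Customer (CustomerId, Name)
    SELECT CustomerId, Name
    FROM dbo.Customer_old;
    GO
    DROP TABLE dbo.Customer_old;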
Each time we do a release, we have a script which runs the create scripts / migration scripts in the appropriate order.
FYI: We use Visual SourceSafe (yuck!) for source code control.
I've been looking for a SQL Server source control tool - and came across a lot of premium versions that do the job - using SQL Server Management Studio as a plugin.
LiquiBase is a free one, but I never quite got it working for my needs.
There is another free product out there, though, that works standalone from SSMS and scripts out objects and data to flat files.
These objects can then be pumped into a new SQL Server instance which will then re-create the database objects.
See gitSQL
Maybe you're asking for LiquiBase?
I have been googling a lot, and I couldn't find out whether this even exists or whether I'm asking for some magic =P
Ok, so here's the deal.
I need to have a way to create a "master-structured" database which will contain only the schemas, structures, tables, stored procedures, UDFs, etc., everything but real data, in SQL Server 2005 (if this is only available in 2008, let me know; I could try to convince my client to pay for it =P).
Then I want to have several "children" of that master db which implement those schemas, tables, etc but each one has different data.
So when I need to create a new stored procedure or something like that, I just create it on the master database (and of course it's available on its children).
Actually, I have several different databases with the same schema but different data. The problem is maintaining congruency between them. Every time I create a script to add some SP or an index or whatever, I have to execute it in every database, and sometimes I could miss one =P
So let's say you have a UNIVERSE (that would be the master db) and the universe has SPACES (each one represented by a child db). The application I'm working on needs to dynamically "clone" SPACES, and to do that we have to create a new database. Nowadays I'm creating a backup of the db being cloned, restoring it as a new one, and truncating the tables.
I want to be able to create a new "child" of the "master" db, which will maintain the schemas and everything, but will start with empty data.
Hope it's clear... My english is not perfect, sorry about that =P
Thanks to all!
What you really need is to version-control your database schema.
See do-you-source-control-your-databases
If you use SQL Server, I would recommend dbGhost - not expensive and does a great job of:
synchronizing 2 databases
diff-ing 2 databases
creating a database from a set of scripts (I would recommend this approach)
batch support, so that you can upgrade all your databases using a single batch
You can use this infrastructure for both:
rolling development versions to test, integration and production systems
rolling your 'updated' system to multiple production deployments (especially in a hosted environment)
I would write my changes as a SQL file and use OSQL or SQLCMD via a batch file to ensure that they were executed against all the databases without my having to think about it.
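The same run-it-everywhere idea can also be done from inside T-SQL rather than a batch file; a rough sketch, assuming the child databases follow a naming convention (the LIKE pattern and the change script are placeholders):

    DECLARE @db     sysname,
            @proc   nvarchar(500),
            @change nvarchar(max);

    -- The change script you want applied everywhere (placeholder).
    SET @change = N'/* your change script here */';

    DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
        SELECT name
        FROM sys.databases
        WHERE name LIKE N'Space[_]%';   -- hypothetical child-DB naming convention

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Calling sp_executesql three-part-qualified runs the statement
        -- in that database's context.
        SET @proc = QUOTENAME(@db) + N'.sys.sp_executesql';
        EXEC @proc @change;
        FETCH NEXT FROM dbs INTO @db;
    END

    CLOSE dbs;
    DEALLOCATE dbs;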
As an alternative, I would use the Visual Studio Database Pro tools or the Red Gate SQL Compare tools to compare and propagate the changes.
There are kludges, but the mainstream way to handle this is still to use Source Code Control (with all its other attendant benefits.) And SQL Server is increasingly SCC friendly.
Also, for many (most robust) sites it's a per-server issue as much as a per-database issue.
You can put things like SPs in master and call them from anywhere. As for other objects like tables, you can put them in model, and new databases will get them when you create a new database.
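For example (the table is illustrative), anything created in model is copied into databases created afterwards, though existing databases are unaffected:

    -- Objects created in model are copied into every database
    -- created from now on.
    USE model;
    GO
    CREATE TABLE dbo.AuditLog
    (
        Id       int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        LoggedAt datetime NOT NULL DEFAULT GETUTCDATE()
    );
    GO
    CREATE DATABASE Space_New;   -- Space_New starts life containing dbo.AuditLog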
However, there is nothing that makes new tables simply pop up in the child databases after being added to the parent.
It would be possible to create something to look through the databases and script them from a template database, and there are also commercial tools which can help discover differences between databases. You could also have a DDL trigger in the "master" database which went out and did this when you created a new table.
If you kept a nice SPACES template, you could script it out (without data) and create the new database - so there would be no need to TRUNCATE. You can script it out from SQL or an external tool.
Little trivia here. The mssqlsystemresource database works as you describe: it is defined once and 'appears' in every database as the special sys schema. Unfortunately, the special 'magic' needed to get this working is not available to user databases. You'll have to use deployment techniques to keep your schemas in sync; that is, apply the changes to every database, as the other answers already suggested.
In theory, you could put a DDL trigger on your UNIVERSE database (SQL Server won't let you put an ordinary trigger on the sysobjects table itself), and then you could enumerate master.dbo.sysdatabases to find all the child databases. If you have a special table that indicates a child database, you can reference child.dbo.sysobjects to find it.
Make no mistake, it would be difficult to implement. But it's one way you could do it.
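A rough sketch of that DDL-trigger idea, created in the UNIVERSE database and assuming a hypothetical dbo.ChildDatabases table that lists the child database names:

    -- Created in the UNIVERSE ("master") database.
    CREATE TRIGGER trg_PropagateCreateTable
    ON DATABASE
    FOR CREATE_TABLE
    AS
    BEGIN
        -- Capture the exact DDL statement that fired this trigger.
        DECLARE @ddl nvarchar(max);
        SET @ddl = EVENTDATA().value(
            '(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');

        -- Replay it in every registered child database.
        DECLARE @db sysname, @proc nvarchar(500);
        DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
            SELECT DatabaseName FROM dbo.ChildDatabases;  -- hypothetical registry
        OPEN dbs;
        FETCH NEXT FROM dbs INTO @db;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @proc = QUOTENAME(@db) + N'.sys.sp_executesql';
            EXEC @proc @ddl;
            FETCH NEXT FROM dbs INTO @db;
        END
        CLOSE dbs;
        DEALLOCATE dbs;
    END

Bear in mind this runs inside the transaction of the original CREATE TABLE, so an error in any child database rolls back the whole statement; a more robust variant would queue the captured text into a table and replay it asynchronously.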