The way I develop may not be correct; any advice is welcome.
At the moment I have a WPF application that uses a SQL 2008 database. I have a copy of the database on a laptop and on my home machine. My application is versioned using SVN, and I am able to go from the work laptop to the home machine and update/commit as required to ensure I am using the latest code for the application.
However, the database is a different story: whenever I make a change, I create a backup and transfer it to the other machine, so I get the data and the changes made on each system. To support this, each machine uses a different connection string, and I change a setting in my app to pick the right connection based on my location.
I have now started to use LINQ to SQL and DBML files in my application, which finally brings me to the question: I don't know how to change the connection string in code so the DBML uses the correct database.
Also, is there a better way to transfer the database so I don't need to do the backups and restores? The only reason I have not versioned the schema is that I am not sure how that would handle my data, which is key to my development; various environment settings etc. are stored in the DB and read at runtime.
Your Statement:
I have now started to use LINQ to SQL and DBML files in my application, which finally brings me to the question: I don't know how to change the connection string in code so the DBML uses the correct database.
Yes, it's possible. The generated DataContext class has a constructor overload that takes a connection string:
// Pass whichever connection string is right for the current machine
MYDataContext mycontext = new MYDataContext("Your Connection String");
This is such a common problem, and I have never found a minimal and clean solution to it. How do you keep all the values, variables, databases, and source files in sync between machines?
Well SVN works great for the source files.
For the database, I try to just use one DB if we can get away with it: all the devs point to one machine that hosts the DB, so we aren't wasting time with DB setup and merging. If that's not possible, then we usually just end up dumping the database whenever there is a change and distributing the .bak file around. You can try adding this file to SVN, and it works; you can even have the DB dump on a schedule so that SVN is always getting a new copy. But it's still too much work to keep restoring a DB over and over. Perhaps you could hook some scripting into SVN (we use TortoiseSVN for Windows) and have a job that would do that automatically. That'd be nice.
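As a rough sketch of automating that dump (the server, database name, and backup path here are assumptions, not anything from the original setup), a small C# console app can run the backup so a scheduled task or SVN hook can call it:

using System.Data.SqlClient;

class BackupJob
{
    static void Main()
    {
        // Hypothetical connection string and backup path - adjust for your machines
        const string cs = @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true";
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
            // WITH INIT overwrites the previous backup, so SVN sees one changing file
            var cmd = new SqlCommand(
                @"BACKUP DATABASE MyAppDb TO DISK = 'C:\svn\working\MyAppDb.bak' WITH INIT",
                conn);
            cmd.CommandTimeout = 0; // backups can outlast the default 30-second timeout
            cmd.ExecuteNonQuery();
        }
    }
}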
For the config files (I do ASP.NET, so I have web.config, connectionstrings.config, etc.) I do one of two things: either manually copy the sections that need to change between machines and comment out the part that doesn't apply (clunky), or write a ConfigurationSettings helper that resolves a config key based on the current machine name. e.g.:
Say my current machine is DEV1 and the server is SERVER1. I'll have config keys with names like DEV1.connections.sqlserver and SERVER1.connections.sqlserver. In the code I'll use the helper method GetConfig("connections.sqlserver"), and GetConfig figures out which key to use based on the current machine name.
Using this method, I don't have to keep remembering to monkey around with the dozen .configs every time I upload to the server or change things. But I DO have to make a duplicate key for every machine that will run the application, which can get a bit much. For large teams, instead of machine names I use group names, with a config key that assigns machine names to a group - the idea being that every machine in the group has the application set up in an identical fashion (same file paths, etc.). A minimal sketch of the helper is below.
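Something like this, assuming appSettings keys of the form "<MachineName>.connections.sqlserver" (the key names and error handling are illustrative only):

using System;
using System.Configuration;

public static class ConfigHelper
{
    // Resolves "connections.sqlserver" to "DEV1.connections.sqlserver" when
    // running on a machine named DEV1, then reads it from appSettings.
    public static string GetConfig(string key)
    {
        string machineKey = Environment.MachineName + "." + key;
        string value = ConfigurationManager.AppSettings[machineKey];
        if (value == null)
            throw new ConfigurationErrorsException("No setting found for '" + machineKey + "'");
        return value;
    }
}

(Requires a project reference to System.Configuration.)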
Now, on to your second question about LINQ: when you create a LINQ DBML, it will add a connection string to your config. You just have to make sure that you find this connection string and copy it into your active application. e.g.:
I have a solution that has 2 projects:
1 - website
2 - library
I put the DBML into the library project. If I look in the App.config of the library project, I'll see the connection string that LINQ wants to use. If I copy this connection string into the website's connectionstrings.config file, then when I reference the library and run the website, LINQ will be able to see the connection string it wants to use.
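If you would rather not rely on LINQ finding the string by name, you can also read it from config yourself and hand it to the context explicitly. A short sketch - "MyLibraryConnectionString" is a placeholder for whatever name your DBML generated:

using System.Configuration;

string cs = ConfigurationManager
    .ConnectionStrings["MyLibraryConnectionString"].ConnectionString;
MYDataContext context = new MYDataContext(cs);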
You can try SQL Server Merge Replication, using SQL Server Compact 3.5 as your laptop database and the master copy on your work/home machine as the publisher. Note, however, that this requires SQL Server Standard Edition.
Another option is the Microsoft Sync Framework:
http://msdn.microsoft.com/en-us/sync/default.aspx
You could use Red Gate's SQL Compare and SQL Data Compare to script out changes to the database. You should be in the habit of scripting database changes anyway, as that is what you will need to do when it is time to move changes to prod. I would also make sure all database changes are in SVN; we never make any change to the database without a script in source control.
I ended up just using multiple connection strings and manually changing the connection on the DBML file whenever I moved locations. However, I also have some code in place to programmatically change it based on a project setting for the location, along the lines of the sketch below.
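Roughly like this - a sketch only, assuming a generated settings class with a string setting named "Location" and two named connection strings in the config (all of these names are placeholders):

using System.Configuration;

// Pick a connection string name based on a saved application setting
string location = Properties.Settings.Default.Location; // e.g. "Home" or "Work"
string csName = (location == "Home") ? "HomeConnection" : "WorkConnection";
string cs = ConfigurationManager.ConnectionStrings[csName].ConnectionString;
var db = new MYDataContext(cs);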
I haven't really found a good solution for transferring the databases, and I continue to use the backup-and-restore method.
I'm working with:
VS2013 Professional, Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
I have kind of a two-part question. What I want to achieve is to be able, as seamlessly as possible, to work on the same project on my work PC and home PC. As of right now, I am using hosted online Subversion for source control, which is working fine for application code. The part I have no control over at the moment is the database. I would like to get "all" database changes made at either work or home to sync to my other machine.
By database changes, I mean:
Schema Changes
Data within specific "Application" tables (I obviously do not intend to sync data in all tables)
I followed this just to test getting a DB schema into my project and under source control:
https://msdn.microsoft.com/en-us/library/aa833194%28v=vs.100%29.aspx
It seems to work fine. However, that only covers schema changes while working on one machine. If I then go home and want to:
either build the database from new or apply schema updates on my home machine, or
update data in base "Application" tables
...I have no clue how to do that, or if it is even possible?
I would think there should be a simple (ha!) way to make the schema changes flow through easily?
But changes to app tables might be harder - I'm happy to write a SQL script to manage that, but I'd like that script to run automatically when I do a "refresh" of my local copy of the database.
For schema changes, there are good blogs out there on using SSDT/DataDude/VS DB Projects. Jamie Thomson has written quite a few times on his experiences. I've written up my experiences here: http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
For data, you can use the native "Data Compare" option under the "SQL" menu in SSDT. It's not perfect, but it can help. Overall, though, what you'd want is one of a few things:
1. Extract data from the shared system and write a task to populate it - batch files with BCP, SSIS, or one of the apps that can generate T-SQL for you.
2. Write it yourself, being sure to guard against attempts to insert duplicate data and ensuring the key values remain unchanged (a minimal sketch appears at the end of this answer).
3. Buy a copy of Red-Gate's SQL Data Compare Pro. You can save the compare options and can then execute those through the command line.
If you need this for multiple developers, option 1 or 2 is probably the best way to go, though you can use SQL Data Compare to get started with a pretty good script. You should also be able to use something like Mladen Prajdic's SSMS Tools Pack to script result sets to T-SQL inserts that you could reuse.
If you use one of those options and combine it with a post-deploy script (maybe even one that only runs if this is a "new" build), you should be off to a good start.
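For option 2 above, a hand-rolled copy might look roughly like this in C#/ADO.NET. This is a sketch only: the AppSetting table, its columns, and both connection strings are hypothetical stand-ins for your own reference tables.

using System.Data.SqlClient;

class ReferenceDataSync
{
    static void Main()
    {
        const string sourceCs = "Server=WORKPC;Database=AppDb;Integrated Security=true";
        const string targetCs = "Server=HOMEPC;Database=AppDb;Integrated Security=true";

        using (var source = new SqlConnection(sourceCs))
        using (var target = new SqlConnection(targetCs))
        {
            source.Open();
            target.Open();

            var read = new SqlCommand(
                "SELECT SettingKey, SettingValue FROM dbo.AppSetting", source);
            using (var reader = read.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Guard against duplicates: insert only when the key is absent,
                    // leaving existing key values unchanged
                    var write = new SqlCommand(
                        @"IF NOT EXISTS (SELECT 1 FROM dbo.AppSetting WHERE SettingKey = @k)
                              INSERT INTO dbo.AppSetting (SettingKey, SettingValue)
                              VALUES (@k, @v)", target);
                    write.Parameters.AddWithValue("@k", reader.GetString(0));
                    write.Parameters.AddWithValue("@v", reader.GetString(1));
                    write.ExecuteNonQuery();
                }
            }
        }
    }
}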
I am using GitHub for maintaining versions and code synchronization.
We are a team of two, located in different places.
How can we make sure that our databases are synchronized?
Update:
I am a Rails developer, but these days I'm working on Drupal projects (where the database is the center of variation). So I want to make sure that the team always has a synchronized database, including the values in various tables.
I need something which keeps our data values synchronized.
A centralized database is a good solution, but things get disturbed when someone works offline.
If you use Visual Studio, then you can script your database tables, views, stored procedures and functions as .sql files from a database solution and then check those into version control as well - it's what I currently do at my workplace.
If you don't use Visual Studio, then you can still script your SQL as .sql files (with more work) and then version control them as necessary; a sketch of one way to automate this is below.
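Outside Visual Studio, SQL Server's SMO library can do the scripting for you. A rough sketch (the server name, database name, and output folder are assumptions; it needs references to Microsoft.SqlServer.Smo and its companion assemblies):

using System.IO;
using System.Linq;
using Microsoft.SqlServer.Management.Smo;

class ScriptTables
{
    static void Main()
    {
        var server = new Server(@".\SQLEXPRESS");
        var db = server.Databases["AppDb"];

        Directory.CreateDirectory("db-scripts");
        foreach (Table table in db.Tables)
        {
            if (table.IsSystemObject) continue;
            // Script() returns the CREATE statements for the object
            var statements = table.Script().Cast<string>();
            File.WriteAllLines(Path.Combine("db-scripts", table.Name + ".sql"), statements);
        }
    }
}

The same pattern works for db.Views and db.StoredProcedures.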
Have a look at Red Gate SQL Source Control - http://www.red-gate.com/products/SQL_Source_Control/
To be honest I've never used it, but their other software is fantastic. And if all you want to do is keep the DB schema in sync (rather than full source control), then I have used their SQL Compare product very successfully in the past.
(ps. I don't work for them!)
You can use SQL Source Control together with SQL Data Compare to source-control both schema and data. Here is an article from Red Gate: Source controlling data.
These are some of the possibilities.
Using the same database. Set up a central database that everybody can connect to. This way you are sure everybody uses the same database all the time.
After every change, export the database and commit it to the VCS. This option requires discipline and manual labor.
Use some other kind of schema definition. For example, Doctrine for PHP can build the database from a YAML definition, which can be stored in the VCS. This is easier to automate than option 2.
Use some other software/script which updates the database.
I feel your pain. I had terrible trouble getting SQL Server to play nicely with SVN. In the end I opted for a shared database solution: every day I run an extensive script that backs up all our schema definitions (specifically stored procedures) into text files for version control. Due to the limited number of changes this works well.
I now use this technique for our major project and personal projects too. The only negative is that it relies on being connected all the time. The other answers suggest that full database versioning is very time-consuming, and I tend to agree. For "live" upgrades we use the Red Gate tools; they do both schema and data compare and work very well.
http://www.red-gate.com/products/SQL_Data_Compare/. We were using this tool for keeping databases in sync in our company. Later we had some specific demands, so we had to write our own synchronization code. It depends on how complex your database is and how many changes are happening. It is much simpler if there is a time when no one is working and you can lock the database for synchronization.
Check out OffScale DataGrove.
This product tracks changes to the entire DB - schema and data. You can tag versions at any point in time and return to older states of the DB with a simple command. It also allows you to create virtual, separate copies of the same database, so each team member can have their own separate DB. All the virtual copies are tracked in the same repository, so it's super easy to revert your DB to someone else's version (you simply check out their version, just like you do with source control). This means all your DBs can always be synchronized.
Regarding a centralized DB - just as you don't want to work on the same source code, you don't want to work on the same DB. It means you'll constantly break each other's code and builds each time someone changes something in the DB.
I suggest that you go with a separate DB for each developer, and sync them using DataGrove.
Disclaimer - I work at OffScale :-)
Try Wizardby. This is my personal project, but I've used it in several previous jobs with a great deal of success.
Basically, it's a tool which lets you specify all changes to your database schema in a database-independent manner and then apply these changes to all your databases.
Building and maintaining a database that is then deployed/developed further by many devs is something that goes on in software development all the time. We create a build script and maintain further update scripts that get applied as the database grows over time. There are many ways to manage this, from manual updates to console apps/build scripts that help automate these processes.
Has anyone who has built/managed these processes moved over to a Source Control solution for database schema management? If so, what have they found the best solution to be? Are there any pitfalls that should be avoided?
Red Gate seems to be a big player in the MSSQL world and their DB source control looks very interesting:
http://www.red-gate.com/products/solutions_for_sql/database_version_control.htm
Although it does not look like it replaces the (default) data* management process, so it only covers half of the change-management process from my point of view.
(*When I'm talking about data, I mean lookup values and that sort of thing - data that needs to be deployed by default or in a DR scenario.)
We work in a .Net/MSSQL environment, but I'm sure the premise is the same across all languages.
Similar Questions
One or more of these existing questions might be helpful:
The best way to manage database changes
MySQL database change tracking
SQL Server database change workflow best practices
Verify database changes (version-control)
Transferring changes from a dev DB to a production DB
tracking changes made in database structure
Or a search for Database Change
I look after a data warehouse developed in-house by the bank where I work. This requires constant updating, and we have a team of 2-4 devs working on it.
We are fortunate because there is only the one instance of our "product", so we do not have to cater for deploying to multiple instances which may be at different versions.
We keep a creation script file for each object (table, view, index, stored procedure, trigger) in the database.
We avoid the use of ALTER TABLE whenever possible, preferring to rename the old table, create the new one, and migrate the data over. This means that we don't have to look through a history of ALTER scripts - we can always see the up-to-date version of every table by looking at its create script. The migration is performed by a separate migration script, which can be partly auto-generated.
Each time we do a release, we have a script which runs the create scripts / migration scripts in the appropriate order.
FYI: We use Visual SourceSafe (yuck!) for source code control.
I've been looking for a SQL Server source control tool and came across a lot of premium products that do the job, using SQL Server Management Studio as a plugin.
LiquiBase is a free one, but I never quite got it working for my needs.
There is another free product out there, though, that works standalone from SSMS and scripts out objects and data to flat files.
These objects can then be pumped into a new SQL Server instance which will then re-create the database objects.
See gitSQL
Maybe you're asking for LiquiBase?
Say I have a website and a database for that website hosted locally on my computer (for development), and another database hosted for production - i.e. first I make changes on the dev DB and then I apply those changes to the prod DB.
What is the best way to transfer the changes that I did on the local database to the hosted database?
If it matters, I am using MS Sql Server (2008)
The correct way to do this with Visual Studio and SQL Server is to add a Database Project to the web app solution. The database project should have SQL files that can recreate the entire database completely on a new server, along with all the necessary tables, procedures, users, and roles.
That way, they are included in the source control for all the rest of the code as well.
There is a Changes sub-folder in the Database Project where I put the SQL files that apply any new alterations or additions to the database for subsequent versions.
The SQL in the files should be written with proper "if exists" blocks, such that it can be run safely multiple times on an already-updated database without error.
As a rule, you should never make your changes directly in the database - instead modify the SQL script in the project and apply it to the database to make sure your source code (the SQL files) is always up to date.
We do this in the (Ruby on) Rails world by writing "migrations", which capture the changes you make to the DB structure at each point. These are run with a migration tool (a rake task), which also writes to a DB table so it knows whether a particular migration has been run or not.
You could build a structure like this for your dev platform (.NET?), but I think other answers to this question will suggest existing tools for handling database versioning on your platform, or perhaps for your specific DB. A minimal sketch of the idea follows at the end of this answer.
I don't know any of these tools myself, but check out this list. I see a lot of paid tools out there, but there must be something free. Also check this out.
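To make the migrations idea concrete for .NET, here is a deliberately minimal runner - a sketch under stated assumptions: scripts live in a "Migrations" folder, contain no GO batch separators (ExecuteNonQuery cannot split on GO), and a SchemaMigrations bookkeeping table is acceptable. Real migration tools are far more robust.

using System.Data.SqlClient;
using System.IO;
using System.Linq;

class MigrationRunner
{
    static void Main()
    {
        const string cs = "Server=.;Database=AppDb;Integrated Security=true"; // hypothetical

        using (var conn = new SqlConnection(cs))
        {
            conn.Open();

            // Bookkeeping table: one row per migration that has already run
            new SqlCommand(
                @"IF OBJECT_ID('dbo.SchemaMigrations') IS NULL
                      CREATE TABLE dbo.SchemaMigrations
                          (Name NVARCHAR(260) PRIMARY KEY, AppliedOn DATETIME NOT NULL)",
                conn).ExecuteNonQuery();

            // Apply pending scripts in name order, e.g. 001_create_users.sql, 002_...
            foreach (var file in Directory.GetFiles("Migrations", "*.sql").OrderBy(f => f))
            {
                string name = Path.GetFileName(file);

                var check = new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.SchemaMigrations WHERE Name = @n", conn);
                check.Parameters.AddWithValue("@n", name);
                if ((int)check.ExecuteScalar() > 0) continue; // already applied

                new SqlCommand(File.ReadAllText(file), conn).ExecuteNonQuery();

                var record = new SqlCommand(
                    "INSERT INTO dbo.SchemaMigrations (Name, AppliedOn) VALUES (@n, GETDATE())",
                    conn);
                record.Parameters.AddWithValue("@n", name);
                record.ExecuteNonQuery();
            }
        }
    }
}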
I migrate changes via change scripts written by developers once they have tested/verified their changes (the exception being moving large data). All scripts are stored in a source control system and can be verified by DBAs.
It is a manual, sometimes time-consuming, but effective, safe and controlled process.
Databases are too vital to copy from dev.
There are tools to help create/verify these scripts.
See http://www.red-gate.com/
I have used their tools to compare 2 databases to create scripts.
Brian
If the changes are small, I sometimes make them by hand. For larger changes, I use Red Gate's SQL Compare to generate change scripts. These are hand-verified and run in the QA environment first to make sure they don't break anything. For large changes, we run a special backup prior to making the change both in QA and in production.
We used to use the approach described by Ron. It makes sense for a big project with a dedicated team of DBAs, but if you do not have dedicated developers who write code only for the DB, this approach is expensive in time and resources.
The approach of using Red Gate's DB compare tools is also not ideal: you still have to do a lot of manual work, and you can skip a step by mistake.
Something better is needed. That was the reason we built the "Agile DB Recreation/Import/Reverse/Export tool" (DB RIRE).
The tool is free.
Advantages: your developers use whatever tools they prefer to develop the DEV DB. Then they run DB RIRE, which reverse-engineers the DB (tables, views, stored procedures, etc.) and exports the data into XML files. The XML files can be kept in any code repository system.
The second step is to run DB RIRE once more to generate difference scripts between the structure and data in the XML files and those in the production DB.
Of course you can run as many iterations as you need.
I'm early in development on a web application built in VS2008. I have both a desktop PC (where most of the work gets done) and a laptop (for occasional portability) on which I use AnkhSVN to keep the project code synced. What's the best way to keep my development database (SQL Server Express) synced up as well?
I have a VS database project in SVN containing create scripts, which I regenerate when the schema changes. The original idea was to recreate the DB whenever something changed, but it's quickly becoming a pain. Also, I'd lose all the sample rows I entered to make sure data is being displayed properly.
I'm considering putting the .MDF and .LDF files under source control, but I doubt SQL Server Express will handle it gracefully if I do an SVN Update and the files get yanked out from under it, replaced with newer copies. Sticking a couple big binary files into source control doesn't seem like an elegant solution either, even if it is just a throwaway development database. Any suggestions?
There are obviously a number of ways to approach this, so I am going to list a number of links that should provide a better foundation to build on. These are the links that I've referenced in the past when trying to get others on the bandwagon.
Database Projects in Visual Studio .NET
Data Schema - How Changes are to be Implemented
Is Your Database Under Version Control?
Get Your Database Under Version Control
Also look for MSDN Webcast: Visual Studio 2005 Team Edition for Database Professionals (Part 4 of 4): Schema Source and Version Control
However, with all of that said, if you don't think that you are committed enough to implement some type of version control (either manual or semi-automated), then I HIGHLY recommend you check out the following:
Red Gate SQL Compare
Red Gate SQL Data Compare
Holy cow! Talk about making life easy! I had a project get away from me, with multiple people making schema changes, and I had to keep multiple environments in sync. It was trivial to point the Red Gate products at two databases, see the differences, and then sync them up.
In addition to your database CREATE script, why don't you maintain a default data or sample data script as well?
This is an approach that we've taken for incremental versions of an application we have been maintaining for more than 2 years now, and it works very well. Having a default data script also allows your QA testers to recreate bugs using the same data that you have.
You might also want to take a look at a question I posted some time ago:
Best tool for auto-generating SQL change scripts
You can store a backup (.bak file) of your database rather than the .MDF and .LDF files.
You can restore your DB easily using the following script:
use master
go

-- Drop the existing copy (if any), kicking out other connections first
if exists (select * from master.dbo.sysdatabases where name = 'your_db')
begin
    alter database your_db set SINGLE_USER with rollback IMMEDIATE
    drop database your_db
end

-- Restore from the .bak file, relocating the data and log files
restore database your_db
from disk = 'path\to\your\bak\file'
with move 'Name of dat file' to 'path\to\mdf\file',
     move 'Name of log file' to 'path\to\ldf\file'
go
You can put the above script in a text file, restore.sql, and call it from a batch file using the following command:
osql -E -i restore.sql
That way you can create a script file to automate the whole process (a C# variant is sketched after this list):
1. Get the latest DB backup from the SVN repository (or any suitable storage)
2. Restore the current DB using the .bak file
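If a batch file feels too brittle, the same two steps can be shelled out from a small C# program (a sketch; the paths and working-copy layout are assumptions):

using System.Diagnostics;

class RefreshDatabase
{
    static void Main()
    {
        // Step 1: update the working copy that contains the .bak file
        Run("svn", @"update C:\svn\dbbackups");

        // Step 2: restore the database from the refreshed backup
        Run("osql", @"-E -i C:\svn\dbbackups\restore.sql");
    }

    static void Run(string exe, string args)
    {
        using (var p = Process.Start(new ProcessStartInfo(exe, args) { UseShellExecute = false }))
        {
            p.WaitForExit();
        }
    }
}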
We use a combination of approaches: taking backups from higher environments down, as well as using ApexSQL to handle the initial setup of the schema.
Recently we've been using SubSonic migrations as a coded, source-controlled way to get change scripts in, run through CI; there is also the "Tarantino" project developed by Headspring out of Texas.
Most of these approaches, especially the latter ones, are safe to use on top of most test data. I particularly like the last two automated options because I can make a change, and the next time someone gets latest, they just run the "updater" and are brought up to the latest version.