I am using SourceTree with a Mercurial BitBucket repository. I would like for any SQL script files (*.sql) pulled from my remote BitBucket repo into my local one to simply be executed immediately after I update my working copy with the pulled files. The DB connection info would always be the same.
What would be the simplest way to accomplish this with either SourceTree itself or perhaps a mercurial hook? In the latter case, I believe something could be done with an update hook, but I'm not exactly sure how to set one up.
I am running SQL Server 2012 on a Windows 7 machine.
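For reference, a minimal sketch of the update-hook approach, assuming Windows authentication and that the scripts live in a scripts\ folder at the repo root (both assumptions). In the repository's .hg/hgrc:

```
[hooks]
# Runs after "hg update", which SourceTree's pull-and-update triggers.
update = run_sql_scripts.bat
```

And a hypothetical run_sql_scripts.bat in the repo root, using the sqlcmd utility that ships with SQL Server 2012:

```
@echo off
rem Execute every .sql file in the scripts folder against the local
rem default instance, using Windows authentication (-E).
for %%f in ("%~dp0scripts\*.sql") do sqlcmd -S localhost -E -i "%%f"
```

Note this re-runs every script on every update, not just the newly pulled ones; the hook could inspect the HG_PARENT1 environment variable that Mercurial passes to update hooks to narrow that down.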
I have a Windows server with MS SQL Server running on it.
On the SQL Server developers have created stored procedures, views, tables, triggers.
On the Windows server developers created shell scripts.
I would like to start versioning the code described above in a BitBucket repository. I have a repository created in BitBucket.
1. How should the branches be organized in this repository? e.g. "SQL Server\Database\..." and "Windows Server\shell_script"...
2. Can I connect BitBucket to SQL Server and the Windows server and specify which code needs to be versioned?
3. Are both options 1 and 2 above possible?
I just need to version control the changes to the code and have the ability to mark under which project the code change was made.
I am new to BitBucket and am using its web front end. I do not know how to configure command line access, so please try not to reference command-line Bitbucket operations. Sorry if this is confusing.
Please help.
I know this is an old question but anyway, in principle I'd recommend:
Put all the server shell scripts into one place and make that a git repo linked to your bitbucket repo
Add a server shell script to export what you want version controlled from the SQL db
The export from the SQL db should be to text files so they are easily 'diffable'
You might as well make the export go to a sub-directory within the shell scripts repo so that everything is in one place and can't get out of sync
That way you only have one repo, not a separate one for the server shell scripts and the db
Make sure people run the export script and then commit everything when they make a change
Ideally you'd have a test server, which means you'd want a way to push changes from the repo into the SQL db. I presume you can do this with a script by deleting the server setup and re-creating it from the text files.
So basically, you can't connect an SQL db to bitbucket directly. You need scripts to read and write to the db from a repo.
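To make the export idea concrete, a hypothetical export script using the catalog views (server, database and path names are placeholders); note this covers coded modules, not table definitions:

```
-- export_definitions.sql: dump the source of every module (procedures,
-- views, functions, triggers) to one diffable text stream.
SET NOCOUNT ON;
SELECT o.type_desc,
       s.name + '.' + o.name AS object_name,
       m.definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
ORDER BY o.type_desc, object_name;
```

A shell script in the repo could then run something like sqlcmd -S yourserver -d yourdb -E -i export_definitions.sql -o db_export\definitions.txt -y 0 and commit the output along with everything else.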
I want to use a database project for script deployment to an Azure SQL server; I don't want to import the full database, just use the project for delta scripts. I added a project and included one script file, with None as its build action, that contains a CREATE TABLE statement. I publish the project and it completes successfully, but the CREATE statement is not executed. What is wrong here? Is there another way to do this?
TLDR: Set your build action to "Post Deployment Script".
Longer:
What happens in SSDT is that all the files with a build action of "Build" are built into a model of what the database should look like. When the deploy happens, that model is compared to the target database and, if there are any changes, a change script is generated and then optionally deployed.
If you have any files marked as pre- or post-deployment scripts, they are prepended or appended to the change script and run as part of the deployment.
If you have any files with a build action of "None", SSDT ignores them. You could put anything in there, even an ASCII picture of a donkey, and the project would still build and deploy (obviously your ASCII donkey won't get deployed anywhere).
If you just want to use SSDT to do your deployments, set the build action to pre- or post-deploy and the script will be included. This is a pretty odd way to work, though: either don't use SSDT, or use SSDT and put the model of your entire database in there.
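As an illustration, a sketch of a guarded post-deployment script (the file and table names are placeholders). Because post-deployment scripts run on every publish, it checks for existence first so it stays idempotent:

```
-- Script.PostDeployment.sql, with its build action set to the
-- post-deployment option. Appended to the generated change script
-- on every publish, so guard against re-running the CREATE.
IF OBJECT_ID(N'dbo.ChangeLog', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.ChangeLog
    (
        Id        int IDENTITY(1, 1) PRIMARY KEY,
        AppliedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        Notes     nvarchar(4000) NULL
    );
END;
```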
Personally, I would use SSDT properly and live the dream.
Ed
I already have an offline updater as a download, but I want to make it automatic. It's a client/server application with an MS SQL database.
Three parts have to be updated:
an update of the proprietary software (a setup.exe with next, next, type the login/password for the server service, finish) which delivers a WinClient and a service for the server. Auto-update is only needed for the server; the server already auto-updates the clients.
one .hug file which contains the new customized program version. Its target path is a directory inside the above software's installation and can be determined from the registry.
an MS SQL database update with new content (in a .bak file) and structural changes. The database server is sometimes a different server from the application server. At the moment you have to type in the SQL instance when running the updater.
My ideas are these:
an updater program in the tray on the application server and the database server
the updater polls my FTP server on a timer for newer versions and downloads the files if they exist
on the application server it runs the setup.exe and copies the .hug file. Maybe I could make the setup silent with command prompt switches, but I'd still need to fill in the login/password for the service user.
on the database server I configure the auto-updater once with the instance name. The updater downloads the .bak file and runs a SQL script with INSERT, UPDATE and ALTER statements.
but one big problem is users who are still logged in to the database; I don't currently know how to solve that (a possible approach is sketched below).
So, how would you realise such a project? Or have you already done something similar? Is it generally possible?
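For what it's worth, the logged-in-users problem for the database step is commonly handled by forcing connections off before the restore; a T-SQL sketch with placeholder names and paths:

```
USE master;
-- Kick out all sessions and roll back their open transactions immediately,
-- restore the downloaded backup, then reopen the database to everyone.
ALTER DATABASE MyAppDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE MyAppDb FROM DISK = N'C:\Updates\MyAppDb.bak' WITH REPLACE;
ALTER DATABASE MyAppDb SET MULTI_USER;
```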
What is the best approach to execute a package within another package?
1. From SQL Server?
In this case, I have to deploy the child package every time the master package is executed.
2. From file?
In this case, I am forced to deploy all packages as files (not to SQL Server). Then the local package path will not be the same as the package path on the server...
I prefer running from file.
This lets me use source control as a way to deploy the files. Also, in SQL 2012 and higher you can actually diff SSIS package files.
If you want to keep the path the same, maybe you could try a mapped directory on your localhost. That way you could, for example, create an E: drive that maps to a location on C:. This lets you keep local and server locations in sync; see the sketch below.
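For instance, Windows' built-in subst command can create such a virtual drive (the paths here are assumptions):

```
rem Back a virtual E: drive with a local folder so that package references
rem to E:\... resolve identically on this machine and on the server.
rem The mapping can be removed again with: subst E: /D
subst E: C:\ssis_packages
```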
OK, I'm warming up to Git and Dropbox for version control. I'm building DNN sites and I'm in the process of adopting Git/Dropbox.
I would also like to use Git on the SQL Server backing database.
Is there some sort of best practice that could be employed here?
I'm currently getting an access denied error when I attempt to create a repository in the SQL Server DATA directory.
You probably don't have permissions to create a .git folder in there. I would use the SQL Server tools to write the backup files somewhere else, and version those instead. You should have no problem putting them in a git repo; see the sketch below.
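A minimal sketch of that, with placeholder names: back the database up into a normal folder (outside the DATA directory) that is itself the git working tree:

```
-- Overwrite the previous backup file each time (WITH INIT) so the repo
-- tracks one changing file rather than an ever-growing set of backups.
BACKUP DATABASE MyDnnDb
TO DISK = N'C:\repos\dnn-db\MyDnnDb.bak'
WITH INIT;
```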
Hope this helps.
I set my database file location to a custom directory, then stopped the database service and granted read/full-control rights on the file to all users on the system. After that, git had no problem adding the file to version control.