I have what appears to be a strange use case that I have been unable to find an answer to on Google. I have a website with a SQL Server backend that grabs a copy of several different databases nightly from our different clients in order to keep the latest information on the web. The website uses 14 different stored procedures to get all of the information it needs as a user navigates the site.
Up until now, I have been manually updating these procedures in each customer's database to ensure they always have the latest version. This is difficult, however, as I do not have reliable remote connections into each of them and some require coordination with their IT department. My goal instead is to set up a nightly task or script that would look in a folder and run every .sql file in that folder against every database.
This is where I have been unable to make headway. I know how to use SQLCMD to run a procedure through Task Scheduler, but I don't know how to do that against a dynamic list of databases. I already have a script set up to perform the restore operation on each database from a .bat file that is FTPed onto the server. Is it possible to use xp_cmdshell to run each file in a folder against a database name stored in a variable? Any other suggestions?
You can use PowerShell and script whatever you need; everything you describe can be done from a PowerShell script. You can call sqlcmd from PowerShell to run SQL queries against any database, and you can also read each file from your folder and run them one by one.
Finally, to schedule your PowerShell script to run nightly, one option is Task Scheduler; a Node script or any custom program or service would also work.
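For example, a minimal sketch of that loop might look like the following (this assumes the SqlServer module is installed for Invoke-Sqlcmd; the server name, folder path, and database filter are placeholders you would adjust):

    # Sketch: run every .sql file in a folder against every user database on one instance.
    # Requires the SqlServer PowerShell module (Invoke-Sqlcmd); names below are placeholders.
    Import-Module SqlServer

    $server     = "MYSERVER"
    $scriptPath = "C:\DeployScripts"

    # Build the database list dynamically, skipping the system databases.
    $databases = Invoke-Sqlcmd -ServerInstance $server -Query "SELECT name FROM sys.databases WHERE database_id > 4"

    foreach ($db in $databases) {
        foreach ($file in Get-ChildItem -Path $scriptPath -Filter *.sql) {
            # Apply each script file to the current database.
            Invoke-Sqlcmd -ServerInstance $server -Database $db.name -InputFile $file.FullName
        }
    }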
Searching with the right keywords will turn up plenty of resources.
Hope this gives you some ideas to move forward.
Related
How do I achieve repeatable migration of SQL scripts to every database? I have a segment called API and it needs to be deployed to all the existing databases in SQL Server.
Though I am able to repeatedly run/execute the set of scripts based on the naming convention, I am not able to run them against every database.
As of now, I have a data-system.json file where all the databases and segments are registered, and I am using it to run a particular segment against a single database.
I'm not 100% sure what you're asking, but in reference to the first part of your question:
How do I achieve repeatable migration of SQL scripts to every database?
If you want to run your Flyway scripts on multiple databases, you can use the 'migrate' command in the Flyway CLI to do that (https://flywaydb.org/documentation/command/migrate).
You can configure the environment-specific info (e.g. login credentials) using environment variables (https://flywaydb.org/documentation/envvars).
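For example, since you already register the databases in data-system.json, a rough sketch could loop over that file and call migrate once per database (the JSON layout, property names, and locations path here are assumptions based on your description):

    # Sketch: run 'flyway migrate' once per database listed in data-system.json.
    # The JSON structure, property names, and locations path are assumptions; adjust to your setup.
    $config = Get-Content -Raw -Path ".\data-system.json" | ConvertFrom-Json

    foreach ($db in $config.databases) {
        $url = "jdbc:sqlserver://$($db.server);databaseName=$($db.name)"

        # Credentials could also come from the FLYWAY_USER / FLYWAY_PASSWORD environment variables.
        flyway migrate -url="$url" -user="$($db.user)" -password="$($db.password)" -locations="filesystem:./sql/API"
    }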
Thanks
We are currently using TeamCity and I am wondering if it is possible to have it handle our database process. Here is what I am trying to accomplish.
User runs a build
TeamCity remotes into database server (or tells a program to via command line)
SQL script is run that updates record(s)
Copies the MDF/LDF back to TeamCity for manipulation in the build
Alternatively, it could work like this if this is easier:
User logs in to database server and runs batch file which does the following:
SQL script is run that updates record(s)
MDF/LDF is copied and then uploaded to repository
Build process is called through a web hook with a parameter
I can't seem to find anything that even gets me started. Any help getting pointed in the right direction would be appreciated.
From your description above I am going to guess you are trying to make a copy of a shared (development) database, which you then want to modify and run tests against on the CI server.
There is nothing to stop you doing what you describe with TeamCity (as it can run any arbitrary code as a build step) but it is obviously a bit clunky and it provides no specific support for what you are trying to do.
Some alternative approaches:
Consider connecting directly to your shared database, but place all your operations within a transaction so you can discard all changes. If your database offers the capability, consider database snapshots.
Deploy a completely new database on the CI server when you need one. Automate the schema deployment and populate it with test data. Use a lightweight database such as SQL Server LocalDB or SQLite.
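For the second option, a rough sketch using SQL Server Express LocalDB could look like this (the instance, database, and script names are made up, and Invoke-Sqlcmd needs the SqlServer PowerShell module):

    # Sketch: create a throwaway LocalDB instance on the CI agent, deploy the schema, run tests, tear it down.
    $instance = "CiTestInstance"

    sqllocaldb create $instance    # create the LocalDB instance if it does not already exist
    sqllocaldb start $instance

    # Create an empty database and apply the schema / test-data scripts.
    Invoke-Sqlcmd -ServerInstance "(localdb)\$instance" -Query "CREATE DATABASE CiTestDb"
    Invoke-Sqlcmd -ServerInstance "(localdb)\$instance" -Database "CiTestDb" -InputFile ".\schema\deploy.sql"

    # ... run the build's tests against (localdb)\CiTestInstance here ...

    sqllocaldb stop $instance
    sqllocaldb delete $instance    # throw the instance away when the build is finished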
I am currently working on a project where every user has to have their own local database on their laptop in order to go offline and work on the road. Whenever they come back to the office, they plug in their network cable and manually sync the database with the master database.
I am looking for an easy "1-click" solution with preset sync configurations.
So, for example, I'd like to sync all the data in my tables except certain ones, though all stored procedures need to be synced.
It needs to be executable with arguments, so I can run it as a script, e.g. every time the user logs in.
PS: I heard that the Red Gate tools are pretty good. What do you think of them?
You might want to try using the SQL Compare and SQL Data Compare command lines to achieve this.
In SQL Data Compare you can select specific tables, save that selection in a project file, and reference the project from the command line. Or you could use the /include and /exclude switches.
Here is the documentation for the SQL Compare and SQL Data Compare switches.
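As a very rough illustration only (install paths, project file names, and exact switch spellings should be checked against that documentation for your version):

    # Sketch: sync schema and data from saved project files, e.g. run from a logon script with arguments.
    # Paths and project names are placeholders; check the Red Gate switch documentation for your version.
    $redGate = "C:\Program Files (x86)\Red Gate"

    # Sync schema (including stored procedures) using a saved SQL Compare project.
    & "$redGate\SQL Compare 10\SQLCompare.exe" /project:"C:\Sync\schema.scp" /synchronize

    # Sync data for the selected tables using a saved SQL Data Compare project.
    & "$redGate\SQL Data Compare 10\SQLDataCompare.exe" /project:"C:\Sync\data.sdc" /synchronize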
You can post questions on our forum if you need any assistance.
We've got a SQL Server instance with some 15-20 databases, which we check in to TFS with the help of Red Gate. I'm working on a script to be able to replicate the instance (so a developer could run a local instance when needed, for example) with the help of these scripts. What I'm worried about is the dependencies between these scripts.
In TFS, Red Gate has created these folders with .sql files for each database:
Functions
Security
Stored Procedures
Tables
Triggers
Types
Views
I did a quick test with PowerShell, just looping over these folders to execute the SQL, but I think that might not always work. Is there a strict ordering I can follow? Or is there some simpler way to do this? To clarify, I want to be able to start with a completely empty SQL Server instance and end up with a fully configured one according to what is in TFS (without data, but that is OK). Using PowerShell is not a requirement, so if it is simpler to do this some other way, that is preferable.
If you're already using Red Gate, they have a ton of articles on how to move changes from source control to the database. Here's one which describes moving database code from TFS using the SQL Compare command line:
http://www.codeproject.com/Articles/168595/Continuous-Integration-for-Database-Development
If you compare against an empty database, it will create the script you are looking for.
The only reliable way to deploy the database from scripts folders would be to use Red Gate SQL Compare. If you run the .sql files using PowerShell, the objects may not be created in the right order. Even if you run them in an order that makes sense (functions, then tables, then views...), you still may have dependency issues.
SQL Compare reads all of the scripts and uses them to construct a "virtual" database in memory, then it calculates a dependency matrix for it so when the deployment script is created, things are done in the correct order. That will prevent SQL Server from throwing dependency-related errors.
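As a rough sketch (paths are placeholders, and the switch names should be double-checked against the SQL Compare command-line documentation), a deployment from the scripts folders to an empty local database might look like:

    # Sketch: let SQL Compare read the source-controlled scripts folder and deploy it,
    # in dependency order, to an empty local database. Paths and switches are placeholders.
    & "C:\Program Files (x86)\Red Gate\SQL Compare 10\SQLCompare.exe" `
        /Scripts1:"C:\tfs\MyDatabase" `
        /Server2:"localhost" /Database2:"MyDatabase" `
        /Synchronize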
If you are using Visual Studio with the database option it includes a Schema Compare that will allow you to compare what is in the database project in TFS to the local instance. It will create a script for you to have those objects created in the local instance. I have not tried doing this for a complete instance.
At most, you might have to create the empty databases in the local instance and then let Visual Studio see that the tables and other objects are not there.
You could also just take the last backup of each database and let the developer restore them to their local instance. However, this can vary by environment depending on security policy and what type of data is in the database.
I tend to just use PowerShell to build the scripts for me. I have more control over what is scripted out and when, so when I rerun the scripts on the local instance I can do it in the order it needs to be done in. It may take a little more time, but I get scripts that work better for me, and PowerShell is just my preference. There are some good scripts already written in the SQL community that can help you with this; Jen McCown did a blog post collecting all the posts her husband has written for doing just this, right here.
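If you want to roll something similar yourself, a bare-bones SMO starting point might look like this (assuming the SMO assemblies are installed; the server name and output folder are placeholders):

    # Sketch: use SMO from PowerShell to script out each database's tables to .sql files.
    # Assumes the SQL Server Management Objects (SMO) assemblies are installed locally.
    [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

    $server = New-Object Microsoft.SqlServer.Management.Smo.Server("MYSERVER")
    $outDir = "C:\ScriptedDatabases"

    foreach ($db in $server.Databases | Where-Object { -not $_.IsSystemObject }) {
        $dbDir = Join-Path $outDir $db.Name
        New-Item -ItemType Directory -Path $dbDir -Force | Out-Null

        # Tables are scripted here; the same pattern works for Views, StoredProcedures, and so on.
        foreach ($table in $db.Tables | Where-Object { -not $_.IsSystemObject }) {
            $table.Script() | Out-File -FilePath (Join-Path $dbDir "$($table.Name).sql")
        }
    }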
I've blogged about how to build a database from a set of .sql files using the SQL Compare command line.
http://geekswithblogs.net/SQLDev/archive/2012/04/16/how-to-build-a-database-from-source-control.aspx
The post is more from the point of view of setting up continuous integration, but the principles are the same.
I need to restore a backup from a production database and then automatically reapply SQL scripts (e.g. ALTER TABLE, INSERT, etc) to bring that db schema back to what was under development.
There will be lots of scripts, from a handful of different developers. They won't all be in the same directory.
My current plan is to list the scripts, with their full filesystem paths, in a table in a pseudo-system database. Then I would create a stored procedure in this database which first runs RESTORE DATABASE and then runs a cursor over the list of scripts, building a SQLCMD command string for each script and executing that string with xp_cmdshell.
The sequence of cursor->sqlstring->xp_cmdshell->sqlcmd feels clumsy to me. Also, it requires turning on xp_cmdshell.
I can't be the only one who has done something like this. Is there a cleaner way to run a set of scripts that are scattered around the filesystem on the server? Especially, a way that doesn't require xp_cmdshell?
First off and A-number-one, collect all the database scripts in one central location. Some form of Source Control or Version Control is best, as you can then see who modified what when and (using diff tools if nothing else) why. Leaving the code used to create your databases hither and yon about your network could be a recipe for disaster.
Second off, you need to run scripts against your database. That means you need someone or something to run them, which means executing code. If you're performing this code execution from within SQL Server, you pretty much are going to end up using xp_cmdshell. The alternative? Use something else that can run scripts against databases.
My current solution to this kind of problem is to store the scripts in text (.sql) files, store the files in source control, and keep careful track of the order in which they are to be executed (for example, CREATE TABLEs get run before ALTER TABLEs that add subsequent columns). I then have a batch file--yeah, I've been around for a while, you could do this in most any language--to call SQLCMD (we're on SQL 2005, I used to use osql) and run these scripts against the necessary database(s).
If you don't want to try and "roll your own", there may be more formal tools out there to help manage this process.
Beyond the suggestions about centralization and source control made by Phillip Kelley, if you are familiar with .NET, you might consider writing a small WinForms or WebForms app that uses SQL Server SMO (SQL Server Management Objects). With it, you can pass an entire script to the database just as if you had dropped it into Management Studio. That avoids the need for xp_cmdshell and sqlcmd. Another option would be to create a DTS/SSIS package that would read the files and use the Execute T-SQL task in a loop.
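If a full WinForms app feels like overkill, the same SMO capability is reachable from PowerShell; a minimal sketch (the server, database, and folder names are made up) might be:

    # Sketch: execute each .sql file through SMO, which accepts whole scripts
    # (including GO batch separators), so neither xp_cmdshell nor sqlcmd is needed.
    [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

    $server = New-Object Microsoft.SqlServer.Management.Smo.Server("MYSERVER")
    $db     = $server.Databases["RestoredDb"]

    # Run the scripts in filename order; a numeric prefix can encode the required sequence.
    foreach ($file in Get-ChildItem -Path "C:\DevScripts" -Filter *.sql | Sort-Object Name) {
        $db.ExecuteNonQuery([System.IO.File]::ReadAllText($file.FullName))
    }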