Execute .sql file from file system through SQL Server Agent - sql-server

Is there a way to execute .sql scripts that are on my hard drive through SQL Server Agent without having to use xp_cmdshell or sql_cmd? Using SQL Server 2008.
I am looking for a simple solution like:
include 'c:\mysql\test.sql'
Thanks!

Maybe you should consider using a Windows scheduled task instead of SQL Server Agent. No matter how you read files off disk, it's going to come with the same type of restrictions preventing you from using xp_cmdshell or sqlcmd (assuming the reason isn't just fear).

In recent versions of SSMS you can pick a .sql file and import it directly into the job step (effectively creating a copy that will be run instead of the file on disk).
As Aaron says, better to keep the code safely in the server & backed up.
You can always keep a script which creates the job & steps in a file, so that there is a one-shot creation (or in case you need to deploy across multiple servers).
If editing steps in the Agent dialog is the frustration, then perhaps this is a reasonable alternative. Unless you make efforts to preserve it, though, you will lose the job history if you simply recreate the job each time.
If it's just that the script is frequently amended with changes that need to be tested, then you could look at putting the xp_cmdshell/sqlcmd call into the job, although that makes it much more fragile with respect to file locations and access rights, and potentially makes your error handling a bit more work.
You will need to check that filesystem access is enabled - many sysadmins prefer this to be disabled, due to the risk of something uncontrolled being run.
So don't just assume it will work on servers that aren't yours!
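If you do go the sqlcmd-in-a-job route, a minimal sketch of scripting the job and its step might look like the following (the job name, file path, and local trusted connection are placeholders, not anything from the question):

-- Minimal sketch: an Agent job whose step runs a .sql file from disk via sqlcmd.
-- Job name, step name, and file path are hypothetical.
USE msdb;
GO

EXEC dbo.sp_add_job
    @job_name = N'Run test.sql from disk';

EXEC dbo.sp_add_jobstep
    @job_name  = N'Run test.sql from disk',
    @step_name = N'Execute script file',
    @subsystem = N'CmdExec',                            -- operating-system command step
    @command   = N'sqlcmd -S . -E -i "C:\mysql\test.sql"';

EXEC dbo.sp_add_jobserver
    @job_name = N'Run test.sql from disk';
GO

The Agent service account (or a proxy for the CmdExec subsystem) still needs read access to the file, which is exactly the kind of filesystem access mentioned above.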

Related

Execute PowerShell script stored in SQL Server table

Is there any way we can run a PowerShell script stored inside a SQL Server table from a SQL Server stored procedure?
There's the built-in sproc xp_cmdshell that can be used to do some...extremely hacky things. It can not only be used to issue command-line statements from within T-SQL, but also be used in conjunction with bcp to save the results of a select query to a file, as described in this article.
So yes, what you're asking is theoretically possible. You can construct a T-SQL statement to save the PowerShell script to a file, then call xp_cmdshell a second time to execute the file you've just saved. (Caveat: I've successfully done each of these individually for a couple of projects, but never combined the two. Your mileage may vary.)
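As a rough sketch of those two calls chained together (the dbo.Scripts table, the output path, and the local trusted connection are assumptions for illustration, and xp_cmdshell must be enabled):

-- 1) Write the stored script body out to a file with bcp.
DECLARE @cmd varchar(4000);
SET @cmd = 'bcp "SELECT Body FROM MyDb.dbo.Scripts WHERE Id = 1" '
         + 'queryout "C:\temp\script.ps1" -c -T -S .';
EXEC master.dbo.xp_cmdshell @cmd;

-- 2) Execute the file that was just written.
SET @cmd = 'powershell.exe -ExecutionPolicy Bypass -File "C:\temp\script.ps1"';
EXEC master.dbo.xp_cmdshell @cmd;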
Whether you should actually do this, though, is another matter. There are two things to consider:
Most developers will consider the use of this (admittedly rather convoluted) logic within a stored procedure a nasty hack, as it is difficult to debug/maintain. How does the caller determine if the script completed or not? How does the caller track that the command was correctly issued at all? How does the next developer maintain this when things are breaking?
There are security considerations as well. Obviously the file will only be saved, and its script will only execute, if the xp_cmdshell calls are issued by a user with enough rights to do so. Opening these rights up could potentially open some security holes, and resolving issues with rights not being adequate could be equally challenging.
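For reference, "opening these rights up" usually starts with enabling xp_cmdshell at the instance level (it is off by default), which is precisely the switch many DBAs prefer to leave off:

-- Enable xp_cmdshell instance-wide; disabled by default since SQL Server 2005.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;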

Clarification on proper practices for backing up SQL Server databases?

Recently I found myself needing to back up a client database running on SQL Server 2008 R2.
Normally I would do this by selecting the "Backup" task in SQL Server Management Studio, which produces a single, portable archive of the database. However, a co-worker said that some customers' standard backup practice is to simply create a copy of the .MDF and .LDF files instead. Another then spoke up recommending against such methods, stating that the only 'proper' procedure is to use the Backup task to produce a file as described above and then back that file up instead. In the event of a problem, restoring this backup file and applying transaction log backups allows you to restore service without losing data.
I agree with the recommendation of using the provided task in combination with transaction logs, but I wasn't entirely sure of myself so I kept my mouth shut. Let's be kind and assume that the .MDF/.LDF files were copied when SQL Server itself had been shut down cleanly and completely - is there actually a good reason to use the Backup task instead of copying the raw files, or are we mistaken?
I did some reading on MSDN and found that copying the files and restoring them can result in issues in some cases (see Limitations when using Xcopy deployment), but is that the only difference?
Using the backup task also lets you back up the transaction log, which truncates it, whilst simply copying the files will not: that keeps the log file from growing out of control.
Always use the backup task; it can also be scheduled as an Agent job.
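A minimal sketch of what that looks like in T-SQL (database name and backup paths are hypothetical):

-- Full backup: a single, portable .bak file you can restore elsewhere.
BACKUP DATABASE ClientDb
    TO DISK = N'D:\Backups\ClientDb_full.bak'
    WITH INIT, STATS = 10;

-- Transaction log backup (full recovery model): enables point-in-time restore
-- and truncates the inactive portion of the log.
BACKUP LOG ClientDb
    TO DISK = N'D:\Backups\ClientDb_log.trn'
    WITH INIT, STATS = 10;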

Tips for manually writing SQL Server upgrade scripts

We have some large schema changes coming down the pipe and are in need of some tips on writing upgrade scripts manually. We're using SQL Server 2000; automated tools are not available to us, nor are they an option at this point in time. The only database tool we have is SQL Server Management Studio.
You can restore the database to a local machine which has a newer version of SQL Server, then use the 'Generate Scripts' feature to script out a lot of the database objects.
Make sure to set the option under Advanced Settings to script for SQL Server 2000.
If you are having problems with the generated script, you can try breaking it up into chunks and running it in small batches. That way, if a specific generated statement fails, you can just write that SQL manually to get it to run.
While not quite what you had in mind, you can use schema comparison tools like SQL Compare and just script the changes to a .sql file, which you can then edit by hand before running it. I guess that would be as close as you can get to writing it manually without actually writing it manually.
If you really do need to write it all manually, I would suggest getting some IntelliSense-type tooling to speed things up.
Your upgrade strategy is probably going to be somewhat customized for your deployment scenario, but here are a few points that might help.
You're going to want to test early and often (not that you wouldn't do this anyway), so be sure to have a testing DB in your initial schema, with a backup so you can revert back to "start" and test your upgrade any number of times.
Backups & restores can be time-consuming, so it might be helpful to have a DB with no data rows (schema-only) to test your upgrade script. Remember to get a "start" backup so you can go back there on-demand.
Consider stringing a series of scripts together - you can use one per build, or feature, or whatever. This way, once you've got part of the script working, you can leave it alone.
Big data migration can get tricky. If you're doing data transformations, copying or moving rows to new tables, etc., be sure to check row counts before the move and account for all rows afterwards (see the sketch after these tips).
Plan for failure. If something goes wrong, have a plan to fix it -- whether that's rolling everything back to a backup taken at the beginning of the deployment, or whatever. Just be sure you've got a plan and you understand where your go / no-go points are.
Good luck!
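A minimal sketch of the row-count reconciliation mentioned above (table and column names are made up for illustration):

-- Count rows before the move, perform the copy, then verify nothing was lost.
-- Assumes dbo.NewOrders starts empty.
DECLARE @before int, @after int;

SELECT @before = COUNT(*) FROM dbo.OldOrders;

INSERT INTO dbo.NewOrders (OrderId, CustomerId, Amount)
SELECT OrderId, CustomerId, Amount
FROM dbo.OldOrders;

SELECT @after = COUNT(*) FROM dbo.NewOrders;

IF @before <> @after
    RAISERROR('Row count mismatch after migration: %d source rows, %d copied.', 16, 1, @before, @after);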

Disable SQL Server replication via command line or batch file, then re-enable

We are using a continuous integration process, and one of the steps for that is to synchronize the databases. For that, we've selected RedGate software that will analyze two databases and generate the necessary scripts. However, we have SQL replication running on these databases and therefore many of the scripts are prohibited by SQL Server due to the replication.
Is there a way we can temporarily disable/pause replication so we can run the transformation scripts, and then enable replication again after the script has been executed? Or, if anyone has an alternative suggestion, we're all ears!
Look at what the scripts that Red Gate is producing are doing. Often times, they do stuff because it makes the script less likely to fail in the general case, whereas that protection might not be necessary in your environment.
However, if everything in the Red Gate script must stay, your only option is to remove the article, make your change, and then re-add it. sp_droparticle and sp_addarticle are your friends here.
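A rough sketch of that drop/re-add dance for a transactional article (publication, article, and object names are placeholders; check the parameters against your own topology first):

-- Run at the publisher, in the published database.
-- Drop the subscription on the article, then the article itself.
EXEC sp_dropsubscription
    @publication = N'MyPublication',
    @article     = N'MyTable',
    @subscriber  = N'all';

EXEC sp_droparticle
    @publication = N'MyPublication',
    @article     = N'MyTable',
    @force_invalidate_snapshot = 1;

-- ... run the Red Gate change script against the table here ...

-- Re-add the article; subscribers will need a new snapshot for it.
EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'MyTable',
    @source_object = N'MyTable';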

Nightly importable or attachable copies of production database

We would like to be able to make a nightly copy/backup/snapshot of a production database so that we can import it into the dev environment.
We don't want to log ship to the dev environment because it needs to be something we can reset whenever we like to the last taken copy of the production database.
We need to be able to clear certain logging and/or otherwise useless or heavy tables that would just bloat the copy.
We prefer the attach/detach method as opposed to something like the SQL Server Publishing Wizard because of how much faster an attach is than an import.
I should mention we only have SQL Server Standard, so some features won't be available.
What's the best way to do this?
See the detach/attach procedures documented on MSDN. I'd say use those procedures inside a SQL Agent job (use master.dbo.xp_cmdshell to perform the file copy).
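A minimal sketch of that detach / copy / attach flow (database name, paths, and the target share are hypothetical; the database is offline while detached, and xp_cmdshell must be enabled):

-- Detach the copy so its files can be copied at the OS level.
EXEC sp_detach_db @dbname = N'ProdCopy';

-- Copy the data and log files over to the dev environment.
EXEC master.dbo.xp_cmdshell
    'copy /Y "D:\Data\ProdCopy.mdf" "\\devserver\refresh\ProdCopy.mdf"';
EXEC master.dbo.xp_cmdshell
    'copy /Y "D:\Logs\ProdCopy_log.ldf" "\\devserver\refresh\ProdCopy_log.ldf"';

-- Re-attach locally so the source database comes back online.
EXEC sp_attach_db
    @dbname    = N'ProdCopy',
    @filename1 = N'D:\Data\ProdCopy.mdf',
    @filename2 = N'D:\Logs\ProdCopy_log.ldf';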
You might want to put the big, heavy tables on their own partition and have that partition belong to a different filegroup. You would then back up and restore only the main filegroup.
You might also want to consider doing differential backups: say, a full backup every weekend and a differential every night. I haven't done filegroup backups, so I don't know how well these work together.
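A sketch of what that could look like, with made-up database, filegroup, and path names:

-- Back up only the main filegroup, leaving the heavy logging filegroup out.
BACKUP DATABASE ProdDb
    FILEGROUP = N'PRIMARY'
    TO DISK = N'D:\Backups\ProdDb_primary.bak'
    WITH INIT;

-- Or pair a weekly full backup with a nightly differential.
BACKUP DATABASE ProdDb
    TO DISK = N'D:\Backups\ProdDb_diff.bak'
    WITH DIFFERENTIAL, INIT;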
I'm guessing that you are already doing regular backups of your production database? If you aren't, stop reading this reply and go set it up right now.
I'd recommend that you write a script that automatically runs, say once a day, that:
Drops your current test database.
Restores your current production backup to your test environment.
You can write a simple script to do this and execute it using the sqlcmd or isql.exe command-line tools.
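A minimal sketch of such a refresh script (database name, backup path, logical file names, and the logging table are assumptions; use RESTORE FILELISTONLY to find the real logical names):

-- Drop the current dev copy if it exists, then restore last night's
-- production backup over it.
USE master;
GO

IF DB_ID(N'DevDb') IS NOT NULL
BEGIN
    ALTER DATABASE DevDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE DevDb;
END
GO

RESTORE DATABASE DevDb
    FROM DISK = N'\\backupshare\ProdDb_nightly.bak'
    WITH MOVE N'ProdDb_Data' TO N'D:\Data\DevDb.mdf',
         MOVE N'ProdDb_Log'  TO N'D:\Logs\DevDb_log.ldf',
         REPLACE, STATS = 10;
GO

-- Clear out the heavy logging tables that would otherwise bloat the copy.
USE DevDb;
TRUNCATE TABLE dbo.AuditLog;   -- hypothetical table
GO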
