I know I could write scripts and create jobs to run them, but at least some of what I want it to do is beyond my programming abilities, so that isn't really an option.
What I'm imagining is something that can run on a regular schedule that will examine all the databases on a server and automatically shrink data and log files (after a backup, of course) when they've reached a file size that contains too much free space. It would be nice if it could defrag index files when they've become too fragmented as well.
I guess what I'm probably looking for is a DBA in a box!
Or it could just be that I need better performance monitoring tools instead. I know how to take care of both of those tasks; it's more that I forget to check for them until I start seeing performance problems in my apps.
If you are using SQL Server 2005, fire up Management Studio and look at the Maintenance Plans section.
See http://msdn.microsoft.com/en-us/library/ms187658.aspx for an overview and http://msdn.microsoft.com/en-us/library/ms189036.aspx for details on the Maintenance plan wizard.
Finally, http://msdn.microsoft.com/en-us/library/ms140255.aspx is a list of all the maintenance tasks available.
I am pretty sure this is all available even in the Express Edition. I can't speak to whether anything has changed in 2008, since I haven't used it yet.
That stuff is all built in; it's called a maintenance plan.
Yeah, everything you described (except maybe perf monitoring) can be done with database maintenance plans: backups, shrinking log files, etc.
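For anyone curious what a maintenance plan is actually doing under the covers, the manual T-SQL equivalent looks roughly like this (a sketch only; the database, file, table, and index names are placeholders):

    -- Rough T-SQL equivalent of shrink and reindex maintenance plan tasks.
    -- All object names below are placeholders.
    USE MyAppDb;
    GO

    -- Shrink the log file back down to a 100 MB target (only worth doing when
    -- it has grown far beyond what it normally needs; take a backup first).
    DBCC SHRINKFILE (MyAppDb_log, 100);
    GO

    -- Rebuild a heavily fragmented index; use REORGANIZE for light fragmentation.
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;
    GO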
I guess the tool I was looking for was under my nose the whole time! I've used Maintenance Plans for backups but I think I set those up at least 4 years ago or more, long before I knew anything about shrinking files and defragging indexes. Thanks!
We have some large schema changes coming down the pipe and are in need of some tips on writing upgrade scripts manually. We're using SQL Server 2000 and do not have access to automated tools, nor are they an option at this point in time. The only database tool we have is SQL Server Management Studio.
You can import the database to a local machine which has a newer version of SQL Server, then use the 'Generate Scripts' feature to script out a lot of the database objects.
Make sure to set the option in the Advanced Settings to script for SQL Server 2000.
If you are having problems with the generated script, you can try breaking it up into chunks and running it in small batches. That way, if any specific piece fails, you can just rewrite that SQL manually to get it to run.
While not quite what you had in mind, you can use schema comparison tools like SQL Compare and script the changes to a .sql file, which you can then edit by hand before running it. I guess that's about as close as you can get to writing it manually without actually writing it manually.
If you really need to write it all manually, I would suggest getting some IntelliSense-type tools to speed things up.
Your upgrade strategy is probably going to be somewhat customized for your deployment scenario, but here are a few points that might help.
You're going to want to test early and often (not that you wouldn't do this anyway), so be sure to have a testing DB in your initial schema, with a backup so you can revert back to "start" and test your upgrade any number of times.
Backups & restores can be time-consuming, so it might be helpful to have a DB with no data rows (schema-only) to test your upgrade script. Remember to get a "start" backup so you can go back there on-demand.
Consider stringing a series of scripts together - you can use one per build, or feature, or whatever. This way, once you've got part of the script working, you can leave it alone.
Big data migration can get tricky. If you're doing data transformations, copying or moving rows to new tables, etc., be sure to check row counts before the move and account for all rows afterwards.
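For instance, a minimal sketch of that row-count check (the table and column names are hypothetical, and it assumes the new table starts out empty):

    -- Hypothetical example: copy rows to a new table and verify nothing was lost.
    -- Works on SQL Server 2000; table/column names are placeholders.
    DECLARE @before INT, @after INT

    SELECT @before = COUNT(*) FROM dbo.OldCustomers

    BEGIN TRANSACTION

    INSERT INTO dbo.NewCustomers (CustomerId, Name, CreatedOn)
    SELECT CustomerId, Name, CreatedOn
    FROM dbo.OldCustomers

    SELECT @after = COUNT(*) FROM dbo.NewCustomers

    IF @after <> @before
    BEGIN
        RAISERROR ('Row count mismatch: expected %d, got %d', 16, 1, @before, @after)
        ROLLBACK TRANSACTION
    END
    ELSE
        COMMIT TRANSACTION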
Plan for failure. If something goes wrong, have a plan to fix it -- whether that's rolling everything back to a backup taken at the beginning of the deployment, or whatever. Just be sure you've got a plan and you understand where your go / no-go points are.
Good luck!
I've created software that is supposed to synchronize data between two databases in SQL Server. The program has been tested as much as I was able to with a limited amount of data and limited time. Now I need to put it into production, and I want to play it safe.
What would be the best approach for recovering if something goes wrong and a database gets corrupted (meaning no longer usable by the original program)?
I know I can back up both databases each time I perform the sync. I also know that I could do point-in-time recovery.
Are there any other options? Is it possible to rollback only the changes made by the sync service? (both databases are going to be used by other software)
You probably have already, but I suggest investigating the backup and recovery options available in SQL Server. Since you have no spec, you don't know how the system is going to behave against these changes, which leaves you with a higher likelihood of problems. For this reason (and many other obvious reasons) I would want a solid SQL backup/recovery process in place. Unfortunately Express isn't very good at automating this, but you can run backups manually before the sync.
At the very least, make everything transactional; a failure in your program should not leave the databases in a partially sync'd state.
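For example, each sync batch on the database side could be wrapped in a transaction with error handling, so a mid-run failure rolls everything back (a sketch assuming SQL Server 2005+; the database, table, and column names are made up):

    -- Sketch: one sync batch wrapped in a transaction (SQL Server 2005+).
    -- Database/table/column names are hypothetical.
    BEGIN TRY
        BEGIN TRANSACTION

        -- Update rows that already exist in the target...
        UPDATE dst
        SET    dst.Amount = src.Amount
        FROM   TargetDb.dbo.Orders AS dst
        JOIN   SourceDb.dbo.Orders AS src ON src.OrderId = dst.OrderId

        -- ...and insert the ones that don't.
        INSERT INTO TargetDb.dbo.Orders (OrderId, Amount)
        SELECT src.OrderId, src.Amount
        FROM   SourceDb.dbo.Orders AS src
        WHERE  NOT EXISTS (SELECT 1 FROM TargetDb.dbo.Orders AS dst
                           WHERE dst.OrderId = src.OrderId)

        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION

        -- Re-raise so the sync program sees the failure and can log/retry.
        DECLARE @msg NVARCHAR(2048)
        SET @msg = ERROR_MESSAGE()
        RAISERROR (@msg, 16, 1)
    END CATCH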
Too bad you don't have a full version of SQL Server... then you might be able to use something like replication services and eliminate this program altogether? ;)
I'm in a situation where I came into a new job and I have to support several legacy systems. The original developer is no longer around. These legacy systems are really hammering away at our SQL Server and killing performance. I know that there are a lot of things that can be done in the code, but rewriting code is really my last resort.
What I'm looking for is some sort of tool that will monitor the queries coming into the server and give recommendations on indexing solutions. I know I can use the SQL Server Profiler but I'm looking for something a little more user friendly and something that can help me make the indexing decisions.
I know I didn't explain it very well, but I'm sure this is a common request. I'd like to make informed decisions on what to index and avoid "shooting from the hip" and indexing everything in sight. Thanks for any recommendations!
You don't need a third party tool for this.
Assuming SQL Server 2005+: as long as you can use SQL Profiler (actually SQL Trace; don't use the Profiler GUI for this, to keep tracing overhead as low as possible) to collect a representative workload, you can use the Database Tuning Advisor to automate analysis of the workload and make indexing recommendations.
You can also use the Missing Index DMVs for a quick overview of areas to investigate, but the DTA will do more holistic analysis and take into account possible adverse effects of indexes on data modification statements.
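For example, the missing-index DMVs can be queried directly; something like this gives a first cut (the "impact" expression is just a rough prioritisation heuristic, not an official formula):

    -- Quick look at indexes SQL Server thinks are missing (2005+).
    -- The impact expression is only a rough way to sort the results.
    SELECT TOP 25
        mid.statement                                 AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns,
        migs.user_seeks + migs.user_scans             AS times_wanted,
        migs.avg_user_impact * migs.avg_total_user_cost
            * (migs.user_seeks + migs.user_scans)     AS rough_impact
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
        ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
        ON migs.group_handle = mig.index_group_handle
    ORDER BY rough_impact DESC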
+1 for Martin's answer, but since you asked about 3rd party tools, I'll mention one of my favorites (and no, I don't work for the company). Ignite for SQL Server does an excellent job of analyzing server activity in terms of wait time analysis. It won't make recommendations for you, but it will quickly identify the worst performing queries where you need to focus your effort.
SQL Server 2005+ has a lot of DMVs (dynamic management views) that you can query to get server info, as well as the Profiler / SQL Trace tool.
We administer several large database servers.
Idera is a good tool to manage multiple database servers easily.
I think you'd make a much better DBA if you learn more about the built-in functionality of SQL Server.
Have a browse of http://msdn.microsoft.com/en-us/library/ms188754.aspx to find out more about DMVs and functions.
Another common source of performance problems is your indexes.
There's a great tutorial that combines the DMVs with improving indexes here:
http://searchsqlserver.techtarget.com/tip/Using-dynamic-management-views-to-improve-SQL-Server-index-effectiveness
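As a quick taste of that approach, a query along these lines shows indexes that are being maintained on every write but rarely used for reads (run it in the database you care about, and treat the results as candidates to review rather than a drop list):

    -- Indexes with write overhead but no seeks/scans/lookups since the last restart.
    SELECT  OBJECT_NAME(ius.object_id)  AS table_name,
            i.name                      AS index_name,
            ius.user_seeks,
            ius.user_scans,
            ius.user_lookups,
            ius.user_updates
    FROM sys.dm_db_index_usage_stats AS ius
    JOIN sys.indexes AS i
        ON  i.object_id = ius.object_id
        AND i.index_id  = ius.index_id
    WHERE ius.database_id = DB_ID()
      AND ius.user_seeks + ius.user_scans + ius.user_lookups = 0
    ORDER BY ius.user_updates DESC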
Idera is really worth checking out as a good starting point, though. Combined with DMVs and SQL Trace, there shouldn't be much you won't be able to fix.
Idera just takes most of the legwork out of doing things.
Idera: SQL Diagnostic Manager (http://www.idera.com/Content/Home.aspx)
Today we're using a shared SQL Server database, which is perfect since I don't know anything about SQL Server maintenance. But for economic reasons we need to upgrade to a dedicated server.
Given that I don't have time to read the entire documentation, what do I absolutely need to know about SQL Server to not screw this up?
Resource suggestions appreciated!
The answer probably has to do with how data-intensive your application is. If it's like most business applications, you're probably OK reading a couple quick start guides and winging it (as long as you back up regularly ... that's important, so read up on that carefully). SQL Server is generally pretty self-tuning, and if you're not talking millions of rows and high TPS, you're probably fine for a little while.
If it is a data-intensive application, or has high availability or throughput needs ... get a DBA, even just on contract. Don't put all your eggs in a basket you can't carry.
Backing up!
Oh, it's the accidental DBA!
Brent Ozar has a handful of useful articles: http://www.brentozar.com/sql/
Don't forget about SQLServerPedia - http://sqlserverpedia.com/wiki/Main_Page
Cheers!
In terms of backing up, don't forget to back up the transaction log as well as the database, unless you'd like your transaction log to grow until it takes over the entire drive.
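A minimal sketch of the two kinds of backup (database and file names are placeholders; this assumes the FULL recovery model):

    -- Full backup on whatever schedule suits you...
    BACKUP DATABASE MyAppDb
    TO DISK = 'D:\Backups\MyAppDb_full.bak'
    WITH INIT

    -- ...plus regular transaction log backups in between, which is what
    -- keeps the log file from growing until it fills the drive.
    BACKUP LOG MyAppDb
    TO DISK = 'D:\Backups\MyAppDb_log.trn'
    WITH INIT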
I'd also read up on indexes and statistics, and on rebuilding each.
Also, you should probably get a good understanding of how database security works.
If at all possible get a dev server as well as a prod one. Much much better to test changes on dev than directly in production! Then limit prod access to only a couple of people and make all changes to production happen through tested scripts.
In order of importance:
1) How to schedule backups
2) How to create indexes
3) How to rebuild indexes
The Profiler and Tuning wizard can help you with 2 and 3.
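For 2 and 3, the raw T-SQL looks roughly like this (the table, column, and index names are made up; the Tuning Advisor or a maintenance plan can generate the real statements for you):

    -- 2) Create an index on a column you frequently filter or join on.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId)

    -- 3) Check fragmentation and rebuild indexes that are badly fragmented.
    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30

    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD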
If you are programming the database and not just administering it, I'd recommend Robert Vieira's book. It's a great introduction.
The backup and restore process for a large database or collection of databases on SQL Server is very important for disaster recovery purposes. However, I have not found a robust solution that will guarantee the whole process is as efficient as possible, 100% reliable, and easily maintainable and configurable across multiple servers.
Microsoft's Maintenance Plans don't seem to be sufficient. The best solution I have used is one that I created manually, using many jobs with many steps per database, running on the source server (backup) and destination server (restore). The jobs use stored procedures to do the backup, copying, and restoring. This runs once a day (full backup/restore) and intraday every 5 minutes (transaction log shipping).
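For context, the core of each log-shipping cycle in a setup like that boils down to something like this (the paths and names are placeholders, not the actual jobs described above):

    -- On the source server: back up the log (every 5 minutes in this setup).
    BACKUP LOG MyAppDb
    TO DISK = '\\backupshare\MyAppDb\MyAppDb_log.trn'
    WITH INIT

    -- (A separate job copies the file to the destination server.)

    -- On the destination server: apply it, leaving the database ready to
    -- accept the next log restore.
    RESTORE LOG MyAppDb
    FROM DISK = 'D:\LogShipping\MyAppDb_log.trn'
    WITH NORECOVERY   -- or WITH STANDBY = '<undo file>' to keep it readable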
Although my current process works and reports any job failures via email, I know the whole process isn't very reliable and cannot be easily maintained/configured on all our servers by a non-DBA without having in-depth knowledge of the process.
I would like to know if others have this same backup/restore process and how others overcome this issue.
I've used a similar setup to keep dev/test/QA databases 'zero-stepped' on a nightly basis for developers and QA folks to use.
Documentation is the key - if you want to remove what Scott Hanselman calls 'bus factor' (i.e. the danger that the creator of the system will get hit by a bus and everything starts to suck).
That said, for normal database backups and disaster recovery plans, I've found that SQL Server Maintenance Plans work out pretty well. As long as you include:
1) Decent documentation
2) Routine testing.
I've outlined some of the ways to do that (for anyone drawn to this question looking for an example of how to create a disaster recovery plan):
SQL Server Backup Best Practices (Free Tutorial/Video)
The key part of your question is the ability for the backup solution to be managed by a non-DBA. Any native SQL Server answer like backup scripts isn't going to meet that need, because backup scripts require T-SQL knowledge.
Because of that, you want to look toward third-party solutions like the ones Mitch Wheat mentioned. I work for Quest (the makers of LiteSpeed) so of course I'm partial to that one - it's easy to show to non-DBAs. Before I left my last company, I had a ten minute session to show the sysadmins and developers how the LiteSpeed console worked, and that was that. They haven't called since.
Another approach is using the same backup software that the rest of your shop uses. TSM, Veritas, Backup Exec and Microsoft DPM all have SQL Server agents that let your Windows admins manage the backup process with varying degrees of ease-of-use. If you really want a non-DBA to manage it, this is probably the most dead-easy way to do it, although you sacrifice a lot of performance that the SQL-specific backup tools give you.
I am doing precisely the same thing and have various issues semi-regularly even with this process.
How do you handle the timing between copying the file from Server A to Server B and restoring the transaction log backup on Server B?
Every once in a while the transaction log backup is larger than normal and takes longer to copy. The restore job then gets an operating system error that the file is in use.
This is not such a big deal, since the file is automatically applied the next time around; however, it would be nicer to have a more elegant solution in general, and one that specifically fixes this issue.