(Apologies...I'm new to SQL Server...)
We have SQL Server 2012 Express installed as a default instance which is the data store for our scheduler (JAMS).
I have a small data entry project where SQL Server will be the back end. It will have a small number of users with minor traffic/data entry/edits throughout the day. The data volumes will be small: one table with 300K rows, 3 more with around 5K rows.
I'm wondering whether I should convert the default instance into two named instances to segregate the applications, or perhaps it's not worth the bother? IIRC, named instances run a separate service per instance, so I could, say, restart the data entry instance without affecting the scheduler instance.
High availability isn't really needed, but if I adversely affect the scheduler it wouldn't be good.
Your thoughts? Thanks...
Named instances are required only when you want to run multiple instances. If you choose to run a single instance it doesn't really matter if it is named or default, although I would personally run as the default instance to avoid the need to specify the instance name.
The big advantage of a single instance is that it uses memory resources more efficiently (e.g. shared buffer pool memory). The downside, as you mentioned, is lack of isolation. Separate databases will help mitigate this concern but you will still be locked into the version supported by the vendor. For example, you can't use SQL 2014 until JAMS supports it. For this specific reason, I suggest you avoid sharing user and vendor databases on the same instance unless you are willing to be tied to the SQL Server versions supported by the vendor. Based on your description, I don't think resource utilization will be a concern so multiple instances is probably the best option if you want flexibility.
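If it helps to confirm what the existing default instance is running before you decide, a quick check of the standard server properties (nothing JAMS-specific) is enough:

    -- InstanceName returns NULL for a default instance, the name otherwise
    SELECT SERVERPROPERTY('InstanceName')   AS InstanceName,
           SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('Edition')        AS Edition;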
How can I monitor which data is being accessed and how frequently?
I need to migrate several (very) small SQL Server instances, each with several small databases. The current configuration is spread across a lot of equally small servers with local storage. The new configuration is a single server with a single NAS.
So far, the SQL Server memory and CPU sizing is OK, as are the database sizes and total IOPS. But there's no existing documentation of what data set is actually being accessed. So, basically, I don't have a clue about the real storage requirements: the total IOPS may hit only a couple of tables (in which case it would work like a charm with just a couple of SSDs), or the whole set of databases may be scanned all the time and I'll need several dozen disks.
So, back to the question: how can I "profile" and get statistics on what data is being accessed, either at the SQL or Windows level?
The best way to see how much a table or group of tables is being used is SQL Server Audit. It has very little impact on SQL Server's performance and, unlike triggers, can easily be set up to monitor selects in addition to inserts/updates/deletes.
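As a rough sketch of that setup (the audit name, file path, database and table names below are made up, so adjust them; note that database-level audit specifications require Enterprise edition before SQL Server 2016 SP1):

    USE master;
    CREATE SERVER AUDIT TableUsageAudit
        TO FILE (FILEPATH = 'D:\Audit\');          -- hypothetical path
    ALTER SERVER AUDIT TableUsageAudit WITH (STATE = ON);

    USE YourDatabase;                              -- hypothetical database
    CREATE DATABASE AUDIT SPECIFICATION TableUsageSpec
        FOR SERVER AUDIT TableUsageAudit
        ADD (SELECT, INSERT, UPDATE, DELETE ON dbo.YourTable BY public)
        WITH (STATE = ON);

    -- Later, read back what was captured and aggregate by object
    SELECT object_name, action_id, COUNT(*) AS hits
    FROM sys.fn_get_audit_file('D:\Audit\*', DEFAULT, DEFAULT)
    GROUP BY object_name, action_id;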
Apologies for the noob question, I've never dealt with failover before.
Currently we have a single hardware server running Windows Server, SQL Server, ASP.NET and a single (very large) web application. We are considering migrating this to an Azure VM.
I see in the SLA that Microsoft will only guarantee 99.95% availability if I am running more than one instance of an Azure VM, to allow for failure and reboots etc.
Does this mean I therefore would have two servers to manage and maintain? For example, two versions of SQL with a database on each, and two sets of ASP.NET application files? If correct, this puts the price up dramatically.
I assume there is no way to 'mirror' one server across to the other to reduce this workload?
Also, our hardware server has 25,000 uploaded files on it. Would we need to put these on a VHD then 'link' them to whichever live server was running, or does Azure do this automatically? Or do they have to be mirrored from the live server to the failover server?
Any pointers would be appreciated. I've already read all the Azure documentation but it hasn't really made things much clearer...
It sounds like you have several topics to look at.
Let's start with the database. The easiest option would be to migrate your SQL Server database into SQL Azure. Then you would have no need to maintain it, nor the machine it would otherwise run on.
This gives you the advantage that the central component can be used by one or many applications.
Second are your uploaded files. I assume your application lets users upload files for sharing or something similar. The best approach would be to write these files straight into Windows Azure blob storage. This often means rewriting a connector, but it centralizes another component.
As a first step you could make the blobs available so clients can download them via a link. If that isn't acceptable, you can fetch the files from blob storage yourself and deliver them to the customer.
If you don't want to rewrite that component, you will have to use a VHD. A VHD can only have one lease, so only one instance can use it at a time. A common approach I have seen is for the application to try to "recover" the lease at startup (trial and error).
Last but not least, your ASP.NET application. For this I would look into cloud instances. Try not to reach for VMs, because then you have to do all the management yourself; VMs are the IaaS layer. A .NET application should be fairly easy to convert and deploy as instances.
Then you don't have to think about failover and so on. Just deploy two instances and the load balancer will do the rest.
If you are able to "outsource" the SQL Server, you can shrink the machine hosting the ASP.NET application. Try to scale out rather than scale up, i.e. use more, smaller nodes rather than one big one (if possible).
If you really do go the VM route, you have to manage all of this yourself, and yes, you then need 2 VMs. You may even need 3 VMs, because you have no automatic load balancer, and with only 2 just one machine can expose port 80.
HTH
Can one install Microsoft's SQL Server Agent without a database instance?
(The aim is to reduce traffic to the database. I would like to put the server agent on another server.)
Thanks for the help.
SQL Server Agent needs to store its data (jobs etc.) in the msdb database, so you will need one.
If your SQL Agent really is causing such a massive amount of I/O that its use of msdb is slowing down overall server performance (which I doubt is the case, but anything is possible if you try hard enough), then you have the option to move the msdb data files to a separate disk (not just a separate logical drive on the same disk, but a separate physical disk entirely).
I expect there are many other performance improvements possible before you get to that point, though. Moving tempdb to another physical disk would be a good starting point.
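For reference, relocating those files is just ALTER DATABASE ... MODIFY FILE plus a service restart; the logical file names below are the usual defaults (check yours in sys.master_files) and the target paths are made up:

    -- See where the files currently live and what they are called
    SELECT name, physical_name
    FROM sys.master_files
    WHERE database_id IN (DB_ID('msdb'), DB_ID('tempdb'));

    -- msdb: change the catalog entries, stop SQL Server, copy the files
    -- to the new location, then start the service again
    ALTER DATABASE msdb MODIFY FILE (NAME = MSDBData, FILENAME = 'E:\SQLData\MSDBData.mdf');
    ALTER DATABASE msdb MODIFY FILE (NAME = MSDBLog,  FILENAME = 'E:\SQLData\MSDBLog.ldf');

    -- tempdb is easier: it is rebuilt at startup, so no file copy is required
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'F:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\TempDB\templog.ldf');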
My question is similar: how can I make my "instance" the default instance, so I do not have to qualify my SQL Server name with the instance name? Although this is not exactly the same question, I was brought here looking for an answer and others might be as well. This solution lets you keep multiple instances while still defining a default one.
To do this you need to change the instance's TCP/IP dynamic port to the default value of 1433. Details on this simple procedure using SQL Server Configuration Manager can be found here:
https://kohera.be/blog/sql-server/make-named-instance-look-like-default-instance/
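Once the instance listens on 1433 you can connect with just the server name. If you want to double-check which port your current session actually came in over, this quick query works (it returns NULL for shared-memory connections):

    SELECT local_tcp_port
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID;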
Just a question about best practices when upgrading an existing database. Assume there will be all kinds of modifications: to the data itself, to the structure and relations, additional columns, disappearing columns, and whatever more.
My problem is a simple one. I'm working on a project that will use SQL Server. No problem there, since I'm enough of an expert to handle this. But this project will be upgraded later on and I need to specify a protocol that needs to be followed by the upgrade mechanism. Basically, this protocol needs to be followed when creating upgrade scripts...
Right now, I have these simple steps:
Add the new columns to the tables.
Add constraints to the new columns.
Add new tables.
Drop constraints where needed.
Drop columns that need to be removed.
Drop tables that need to be removed.
Somehow, this list feels incomplete. Is there a more extended list somewhere describing the proper steps which need to be followed during an upgrade?
Also, is it always possible to do a complete upgrade within a single database transaction (with SQL Server), or are there breakpoints that need to be included within the protocol where one transaction should end and another one starts?

While automated tools would provide a nice, automated solution, I still can't really use them. The development team working on this system has 4 developers, each with their own database on their local system. Every developer keeps track of their own changes to the structure by generating both an upgrade and a downgrade script for their own modifications, for both structural changes and data changes. These scripts can then be used by the other developers to keep their own systems up to date. Whenever the system is going to be released, those scripts are all merged into one big script.
The system does not include any stored procedures or other "special" features. The database is just that: data storage with just tables and the relations between them. No roles, no users, no stored procedures, no triggers, no complex datatypes...

The DB is used by an application where users work 9-to-5, so shutting down for an upgrade can be done easily, including upgrades for the clients. We also add a version number to the database, and the applications check that they're linked to the correct database version.
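To illustrate the version check mentioned above, something along these lines is enough (the table and column names are invented for the example, not the asker's actual schema):

    -- Single-row table recording the schema version the database is at
    CREATE TABLE dbo.DatabaseVersion
    (
        VersionNumber int      NOT NULL,
        AppliedOn     datetime NOT NULL DEFAULT GETDATE()
    );
    INSERT INTO dbo.DatabaseVersion (VersionNumber) VALUES (7);   -- example value

    -- The application runs this at startup and refuses to continue
    -- if the number does not match the version it was built against
    SELECT MAX(VersionNumber) AS CurrentVersion FROM dbo.DatabaseVersion;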
During development, all developers use their own database instance, which they can fully control. Since we're not the ones who use the application, we tend to develop against the Express edition, not a more expensive one. To be honest, we don't develop our application to support a lot of users, but we'll inform our users that, since it uses SQL Server, they can install the system on a bigger SQL Server platform according to their own needs. They will need their own DBA for this, though. We do have a bigger SQL Server available for ourselves, which we also use for our own web interface, but this server is located in a special data center where it is maintained for us, not by us.
The project previously used MS Access for its data storage and was intended for single-user use, but as it turned out, many users still decided to share their databases, and this has shown that the data model itself is reliable enough for multi-user environments. So we migrated to SQL Server to support smaller offices with 3 or more users, and some big organisations that will have 500 or more users at the same time.
Since we need to keep the cost of the software low, we don't have a big budget to spend on expensive tools or a more expensive server.
Check out Red-Gate's SQL Compare (structure comparison), SQL Data Compare (data comparison), and SQL Packager (for packaging up update scripts into a C# project or a .NET executable).
They provide a nice, clean, fully functional and easy-to-use solution for all your database upgrade needs. They're well worth their license fees - they pay for themselves within a few weeks or months.
Highly recommended!
In my opinion, it's an absolute bear doing these manually. For Microsoft SQL Server, I'd recommend using the Database edition of Team System, since it includes complete source control capabilities for your database and can automatically build your scripts for upgrading/downgrading versions.
Another option is SQL Compare from Redgate, which can also handle these kinds of upgrades/downgrades and will produce a very nice SQL script. I've used both, and keeping the historic scripts has helped us troubleshoot issues and resolve many a mystery.
If you are working with a manual script as above, don't forget to also account for stored procedure changes in your scripts. Also, any hand-edited script should be safe to execute multiple times on a database - i.e. if your script includes a table creation or drop, be sure to check for existence first, otherwise your script will fail if executed back to back.
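A few typical existence checks for such a re-runnable script might look like this (object names are placeholders):

    -- Create the table only if it does not exist yet
    IF OBJECT_ID('dbo.NewTable', 'U') IS NULL
        CREATE TABLE dbo.NewTable (Id int NOT NULL PRIMARY KEY);

    -- Add the column only if it is missing
    IF COL_LENGTH('dbo.ExistingTable', 'NewColumn') IS NULL
        ALTER TABLE dbo.ExistingTable ADD NewColumn int NULL;

    -- Drop only if present
    IF OBJECT_ID('dbo.ObsoleteTable', 'U') IS NOT NULL
        DROP TABLE dbo.ObsoleteTable;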
Again, while it's possible to build a manual protocol I'd still fall back on using one of the purpose-built tools out there, and both Team System and SQL Compare will be able to output scripts that you could include as part of an installation/upgrade package.
With database updates I always believe it should be all or nothing. If any of the DB updates fail, your application will be left in an unknown state that could be harmful to the data, so I think it is best practice to either apply them all or none (one transaction around them all).
I also like to backup the database before applying updates so that if anything does go wrong the database can be rolled back (this has saved me numerous times when working with live data).
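A minimal sketch of that combination, assuming SQL Server 2012 or later for THROW (the database name and backup path are placeholders; most DDL in SQL Server is transactional, but a few statements are not, so test your actual script):

    -- Safety net before touching anything
    BACKUP DATABASE YourDatabase
        TO DISK = 'E:\Backups\YourDatabase_preupgrade.bak';    -- hypothetical path

    BEGIN TRY
        BEGIN TRANSACTION;

        -- ... all upgrade statements go here, e.g. ...
        ALTER TABLE dbo.Customer ADD LoyaltyLevel int NULL;    -- example change

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;   -- on SQL Server 2008 and earlier, use RAISERROR instead
    END CATCH;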
Hope this helps.
Best practices for upgrading a production database schema actually look pretty bad on the surface. Unless you can completely shut down your system for the upgrade, which is often not possible, your changes all need to be backwards compatible. If you have many clients accessing the database, you can't update them all simultaneously, so any schema changes you make need to allow old code to run.
That means never renaming a column, and making all new columns nullable. This doesn't mean you leave it like that forever. You write two scripts, one for the initial change, which is backwards compatible, then another to clean things up after all clients have been updated.
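As a hedged illustration of that two-script approach (the table and columns are invented):

    -- Script 1: backwards compatible, safe while old clients are still running
    ALTER TABLE dbo.Orders ADD ShippedDate datetime NULL;    -- new column is nullable

    -- Script 2: run only after every client has been upgraded
    UPDATE dbo.Orders SET ShippedDate = LegacyShipDate WHERE ShippedDate IS NULL;
    ALTER TABLE dbo.Orders ALTER COLUMN ShippedDate datetime NOT NULL;
    ALTER TABLE dbo.Orders DROP COLUMN LegacyShipDate;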
Automated tools are great for validation of schemas, but they are not so good when it comes to actually modifying a complex system. You should break your changes up into many small, discrete change scripts so each can be run manually. If there's a failure, it's easier to pinpoint the cause and fix it. Basically, each feature gets its own script. Give each a unique name and then store that name in the database itself when you run the script so you can query the database to find out what's been run and what hasn't. This is invaluable when you have instances on developer's machines, test servers, production, etc.
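One simple way to record which scripts have already run, roughly in the spirit described above (the table and script name are only illustrative):

    CREATE TABLE dbo.SchemaChangeLog
    (
        ScriptName sysname  NOT NULL PRIMARY KEY,
        AppliedOn  datetime NOT NULL DEFAULT GETDATE()
    );

    -- Each change script guards itself with a check like this
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog
                   WHERE ScriptName = '0042_add_orders_shippeddate')
    BEGIN
        -- ... the actual change goes here ...
        INSERT INTO dbo.SchemaChangeLog (ScriptName)
        VALUES ('0042_add_orders_shippeddate');
    END;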
Sometimes we have upwards of 4-6 people either RDPed in looking at data in SQL Management Studio, or hitting the server with LINQpad, Toad, etc. from various locations, while developing in mostly ASP.NET and Flex with WebOrb. Is this bad? Bad in the sense that we are trying to keep our live production app stable and as lag-free as possible for global users?
I don't think I'd do that. If it was just me, then sure :) but with a bunch of people, God only knows what queries they might run. We always use a test server for such things.
best regards,
don
Best practice would be separate servers. Next best, separate instances on the same server. Next best, separate databases on an instance.
However, I wouldn't be letting any developers RDP into a production SQL Server (or production anything), regardless of choice of segregation mechanism. Use a separate terminal server with tools and everything there.
You can have dev and prod databases on the same instance. Just make sure the permissions are set up so that developers cannot touch the prod db. The downside is that a long-running query in dev will impact prod.
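Roughly, that permission split looks like this (the login and database names are placeholders; it assumes the developer login already exists and is not in any server-level role such as sysadmin):

    -- In the dev database: map the login and give it whatever rights devs need
    USE DevDB;
    CREATE USER DevAlice FOR LOGIN DevAlice;
    EXEC sp_addrolemember 'db_owner', 'DevAlice';   -- ALTER ROLE ... ADD MEMBER on 2012+

    -- In the prod database: simply never create a user for that login,
    -- so the developer cannot even open ProdDB.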
In SQL Server 2005 a better solution is to have a dev instance and a prod instance.
Then if someone misbehaves on the dev instance you can just bring down that instance.
In SQL Server 2008 you can set up Resource Governor resource pools, which can help throttle how many resources a workload can use. You should investigate that.
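A bare-bones Resource Governor setup (Enterprise edition only) that caps a dev workload might look something like this; the pool/group names and the login used in the classifier are just examples:

    USE master;
    CREATE RESOURCE POOL DevPool WITH (MAX_CPU_PERCENT = 20);  -- cap applies under CPU contention
    CREATE WORKLOAD GROUP DevGroup USING DevPool;
    GO

    -- Classifier function routes incoming sessions into a workload group
    CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        IF SUSER_SNAME() = 'DevAlice'        -- placeholder developer login
            RETURN 'DevGroup';
        RETURN 'default';
    END;
    GO

    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;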
It depends on a lot of variables, but it's generally better to have them on different servers. It really comes down to how you use SQL Server: if you just host databases and don't use a lot of the management features, like nightly processes to alter data and other jobs, you might be OK. You are still running a real risk of code bleeding over from the dev database into the production one, though. It's safer to have them separated out, especially for the small amount of money needed to create a dev instance of SQL Server.
I find this a poor practice for several reasons:
First, suppose one of your devs messes up and does something that ends up taking all of the processing power of your server. Oops, prod is down for no good reason.
Second, devs could easily change the wrong database. Oops, prod is down for no good reason. At least you can avoid this by not giving any production rights to devs (which you should be doing anyway as a best practice).
Third, if the database is on the same server it has to have a different name, which can make moving things to prod difficult and error-prone. I think it also makes it less likely that you deploy correctly through source-controlled scripts. If you choose to copy objects from one database to the other, you can have issues with that as well: if there is data in the object already, you may accidentally wipe it out (hope you have a backup), or you may move the new table structure but miss things like the PKs, FKs, default values, triggers, constraints and indexes, or the wizard might take much longer to do the move because in the background it is creating and populating a new table and then dropping the old one and renaming the new one rather than using ALTER TABLE. Oops, prod is down or seriously slowed for no good reason.
I tend to agree with the "separate servers" folks, although with my company we actually do most of our day to day development work on our local machines -- so we have SQL Server installed locally. This can be a pain, of course, if you're developing reporting or something that needs production data. In that scenario, developers here usually get a subset of production data exported to work with.
For acceptance testing vs. deployment though, we do use separate instances.
Developers probably shouldn't have production access UNLESS they're also the ones who do application deployments (as can be the case with small teams like the one I'm in). If you do end up using separate DBs on the same server, I would at least lock down RDP access and grant access to each development DB on an individual basis. That's how it works here -- I don't have admin rights to any of our servers at this time, and can only admin databases for applications that belong specifically to my team.
It depends how much you value your live service. I know I wouldn't trust me and my fat hands running SQL on the same hardware as a live application.
Even if the application is not business critical, and the app is not data-bound, you can set up a development environment on an unused desktop machine, so why wouldn't you do that instead of take the risk?
The set up I use is typically DEV database on a local instance of SQL Server (Development Version for me, but Express would probably also work), a QA database on a test instance of SQL Server. In our environment, this is located on a virtual instance of W2K3 -- soon to be W2K8. Production databases live either on dedicated instances of SQL server or on one of various clustered instances. We don't mix PROD/QA/DEV at all. I use RedGate SQL Compare to synchronize schemas between the various systems, including different developer instances of the database.
It will be 'OK' as long as the team doesn't have any administrator privileges on the server (either SQL or Windows), and their logins only grant access to (and the ability to potentially destroy) the development database and its associated files, with access to the production databases denied.
For other application-testing reasons, we created a copy of our production server (which is a virtual server) on a separate domain. This allowed the Windows server name, SQL Server name and database name to be exactly the same (lots of settings in 3rd party apps require this level of configuration to get different processes to work). Now we can rebuild a test environment by creating an exact virtual image of our production server.
I was sceptical about running SQL Server on a virtual machine, but it has given our small company a lot of flexibility. We like to think our databases are critical, but they are for internal use, and some downtime would just have workers shift their lunch hour.