I have a database that will be used by multiple clients (local installs); the plan is then to copy the data to Azure to allow global reporting, etc.
The database will be using GUIDs for its primary keys.
What should I use for a clustered index on the tables, or does that not matter when adding data to Azure? Do I even really need a clustered index? Azure will have a single copy of the database, with all client data in it, if that makes a difference.
thanks all.
Although you are allowed to create (and have data in) a table without a clustered index in SQL Server, this is not allowed in Windows Azure SQL Database (WASD / SQL Azure). While you can have a table without a clustered index in WASD as a definition, no DML statement will be allowed to execute against such a table, i.e. you will not be able to run INSERT/UPDATE/DELETE statements against a table in WASD without a clustered index. So, if by any chance data is going to the cloud, you should have a clustered index. For more info, check the Clustered Index Requirement in the Guidelines and Limitations for Windows Azure SQL Database.
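As an illustration, here is a minimal sketch of that behaviour; the table and index names are hypothetical. The heap can be created, but DML only starts working once the clustered index is in place:

    -- A heap (no clustered index) can be defined on WASD / SQL Azure...
    CREATE TABLE dbo.Client
    (
        ClientId UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
        Name     NVARCHAR(100)    NOT NULL
    );

    -- ...but INSERT/UPDATE/DELETE against it fails until it gets one:
    CREATE CLUSTERED INDEX CX_Client ON dbo.Client (ClientId);

    INSERT INTO dbo.Client (Name) VALUES (N'Contoso');  -- allowed only now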
Some of the recommendations here are incorrect.
NEWSEQUENTIALID() is not allowed on SQL Azure.
In SQL Azure, a clustered index is absolutely required. You can create a table without one, but you will not be able to add data to it until after you add the clustered index.
In Azure, the clustered index is used for their back-end replication. Reference: http://blogs.msdn.com/b/sqlazure/archive/2010/05/12/10011257.aspx
I think that your best bet is to use a column with the identity property as the clustered index, along with a non-clustered index on your GUID column. I ran into this exact same problem, and after quite a bit of research it's the solution I eventually came up with. It's a bit of a pain to put together, especially if you already have data in production on Azure, but it seems to be the one that addresses all of the issues.
I feel like it would be simplest to use NEWSEQUENTIALID(), but that isn't an option on Azure.
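For what it's worth, a minimal sketch of that pattern (table and index names are made up): the GUID stays the primary key, but non-clustered, while the narrow, ever-increasing identity column carries the clustered index.

    CREATE TABLE dbo.Customer
    (
        CustomerId UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
        ClusterKey INT IDENTITY(1, 1) NOT NULL,
        Name       NVARCHAR(100)    NOT NULL,
        -- The application-facing key stays on the GUID, non-clustered...
        CONSTRAINT PK_Customer PRIMARY KEY NONCLUSTERED (CustomerId)
    );

    -- ...and the identity column gets the clustered index Azure requires.
    CREATE UNIQUE CLUSTERED INDEX CX_Customer ON dbo.Customer (ClusterKey);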
I have taken over a project with an existing SQL Server installation. The client wants to move everything to Azure SQL and make several on-premises databases sync to Azure.
The PKs in the tables are ints, and for Azure Data Sync to work the PKs need to be GUIDs. The database consists of several related tables.
My question is therefore: what is the best way to change the PKs to GUIDs and, at the same time, update the FKs accordingly in the existing tables?
The process as far as I see it:
1. Create a new GUID column.
2. Fill it with IDs.
3. Change the PK to the GUID column.
4. Update the FK tables to the new GUIDs.
Is there an easy scriptable way to make this magically happen?
No, there is nothing built into SQL Server that makes this any easier than the process you already described.
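That said, the steps script fairly mechanically. Below is a rough, hedged sketch for a single parent/child pair; the table and constraint names are invented, so treat it as an outline rather than a ready-made migration.

    -- 1. Add the new GUID columns (the default back-fills existing parent rows).
    ALTER TABLE dbo.Parent ADD ParentGuid UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID();
    ALTER TABLE dbo.Child  ADD ParentGuid UNIQUEIDENTIFIER NULL;

    -- 2. Copy each parent's new GUID to its children via the old int FK.
    UPDATE c
    SET    c.ParentGuid = p.ParentGuid
    FROM   dbo.Child  AS c
    JOIN   dbo.Parent AS p ON p.ParentId = c.ParentId;

    -- 3. Swap the keys: drop the old FK and PK, re-create them on the GUIDs.
    ALTER TABLE dbo.Child  DROP CONSTRAINT FK_Child_Parent;
    ALTER TABLE dbo.Parent DROP CONSTRAINT PK_Parent;
    ALTER TABLE dbo.Parent ADD CONSTRAINT PK_Parent PRIMARY KEY (ParentGuid);
    ALTER TABLE dbo.Child  ALTER COLUMN ParentGuid UNIQUEIDENTIFIER NOT NULL;
    ALTER TABLE dbo.Child  ADD CONSTRAINT FK_Child_Parent
        FOREIGN KEY (ParentGuid) REFERENCES dbo.Parent (ParentGuid);

    -- 4. Once everything is verified, optionally drop the old int columns.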
I was recently developing a .NET MVC web application and sent the database schema to a DBA I work with to get the database built on a production DB server. The DBA asked me if I needed primary keys in all my tables. I said yes, for the primary reason that it is good DB design practice. When I asked why, the DBA told me that our organization prefers to minimize the number of tables with primary keys on its database servers, to conserve resources. Is there some sort of detriment to having primary keys in data tables?
When you make a 'join table', the primary keys from each contributing table form a composite key for the join table. It is then quite possible that this composite key can be indexed.
Inefficient indexing strategies can degrade performance.
An example is the InnoDB engine for MySQL, which is one I work with a lot. With InnoDB, every index entry is concatenated with the value of the corresponding primary key. When a query reads a record via a secondary index, this is used with the primary key to find the record.
So the primary key could affect performance, especially if it is something big like a Java UUID (128 bits, or 36 bytes when stored as text).
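To make that concrete, here is a hypothetical comparison. With InnoDB, every entry in idx_customer silently carries a copy of the primary key, so the UUID version pays the size cost in every secondary index, not just in the clustered one.

    -- UUID primary key stored as text: 36 bytes duplicated per secondary entry.
    CREATE TABLE orders_uuid (
        id          CHAR(36) NOT NULL,
        customer_id INT      NOT NULL,
        PRIMARY KEY (id),
        KEY idx_customer (customer_id)
    ) ENGINE=InnoDB;

    -- Integer primary key: only 4 bytes duplicated per secondary entry.
    CREATE TABLE orders_int (
        id          INT NOT NULL AUTO_INCREMENT,
        customer_id INT NOT NULL,
        PRIMARY KEY (id),
        KEY idx_customer (customer_id)
    ) ENGINE=InnoDB;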
I'm creating a DB using SQL Server 2008.
This DB will be used in two countries, and at some point every day they will be synchronized; I'll use the Replication service to accomplish that.
Most of the tables are using an int column with identity increment. But the tables will be empty when deployed, so both countries will have rows with identity 1, 2, and so on. I've never used replication before, so I want to know: will there be an error when the tables are synchronized?
Should I use a GUID data type instead?
Replicate Identity Columns (MSDN):
Replication offers three identity range management options:
Automatic. Used for merge replication and transactional replication with updates at the Subscriber...
Manual. Used for snapshot and transactional replication without updates at the Subscriber...
None. This option is recommended only for backwards compatibility...
So, yes, you can continue to use IDENTITY, provided you read through the information on replication and choose an option that makes sense for you.
Under Automatic, each server grabs a range of usable identity values and hands the individual values out as needed. Provided synchronization occurs often enough that the ranges aren't completely exhausted, you'll never notice this detail.
And this allows you to scale out later as needed. With e.g. a MOD scheme, where one server hands out odd values and the other even, you can't easily add a third server.
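For reference, the MOD scheme is just seed and increment on IDENTITY; a sketch with a hypothetical table, not a recommendation:

    -- On server A: values 1, 3, 5, ...
    CREATE TABLE dbo.Orders
    (
        OrderId INT IDENTITY(1, 2) NOT NULL PRIMARY KEY,
        Amount  DECIMAL(10, 2)     NOT NULL
    );

    -- On server B the same table uses IDENTITY(2, 2): values 2, 4, 6, ...
    -- Adding a third server would mean changing the increment to 3 on
    -- every server, which is why the scheme doesn't scale out easily.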
By your description, it sounds like you want to implement so-called Merge replication.
In SQL Server you would not need to change the identity to a GUID; however, if you don't, SQL Server will automatically add another column called rowguid to each table, and you may end up with duplicates of your original identity column. To circumvent this, you could have the servers assign IDs mod 2.
In my opinion it makes most sense to use a GUID for the IDs altogether. Don't forget to set the ROWGUIDCOL property on your GUID columns. Good luck.
Relevant MSDN:
http://technet.microsoft.com/en-us/library/ms152746.aspx
Consider adding a deviceID field to all tables users can update. With each device making changes using its own ID as part of the PK, there cannot be conflicts across devices.
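A hypothetical shape for that idea: the device ID becomes the first half of a composite key, and each device only ever generates the second half locally.

    CREATE TABLE dbo.Note
    (
        DeviceId INT NOT NULL,                 -- assigned once per installation
        LocalId  INT IDENTITY(1, 1) NOT NULL,  -- generated locally on that device
        Body     NVARCHAR(MAX) NOT NULL,
        -- No two devices share a DeviceId, so inserts can never collide.
        CONSTRAINT PK_Note PRIMARY KEY (DeviceId, LocalId)
    );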
What's the best practice for handling primary keys using an ORM over Oracle or SQL Server?
Oracle - Should I use a sequence and a trigger, or let the ORM handle this? Or is there some other way?
SQL Server - Should I use the identity property, or something else?
If you are using any kind of ORM, I would suggest letting it handle your primary key generation, in both SQL Server and Oracle.
With either database, I would use a client-generated Guid for the primary key (which would map to uniqueidentifier in SQL Server, or RAW(16) in Oracle). Despite the performance penalty on JOINs when using a Guid foreign key, I tend to work with disconnected clients and replicated databases, so being able to generate unique IDs on the client is a must. Guid IDs also have advantages when working with an ORM, as they simplify your life considerably.
It is a good idea to remember that databases tend to have a life independent from a front end application. Records can be inserted by batch processes, web services, data exchange with other databases, heck, even different applications sharing the same database.
Consequently it is useful if a database table is in charge of its own identify, or at least has that capability. For instance, in Oracle a BEFORE INSERT trigger can check whether a value has been provided for its primary key, and if not generate its own.
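A sketch of that Oracle pattern, with invented names; the trigger only fills in the key when the client left it out:

    CREATE TABLE customer (
        customer_id RAW(16)       NOT NULL PRIMARY KEY,
        name        VARCHAR2(100) NOT NULL
    );

    CREATE OR REPLACE TRIGGER customer_pk_trg
    BEFORE INSERT ON customer
    FOR EACH ROW
    WHEN (NEW.customer_id IS NULL)
    BEGIN
        :NEW.customer_id := SYS_GUID();  -- generate a GUID only if none was supplied
    END;
    /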
Both Oracle and SQL Server can generate GUIDs, so that is not a sufficient reason for delegating identity generation to the client.
Sometimes, there is a natural, unique identifier for a table. For instance, each row in a User table can be uniquely identified by the UserName column. In that case, it may be best to use UserName as the primary key.
Also, consider tables used to form a many to many relationship. A UserGroupMembership table will contain UserId and GroupId columns, which should be the primary key, as the combination uniquely identifies the fact that a particular user is a member of a particular group.
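For example, a minimal sketch (assuming Users and Groups tables with int keys already exist):

    CREATE TABLE dbo.UserGroupMembership
    (
        UserId  INT NOT NULL,
        GroupId INT NOT NULL,
        -- The combination is the natural identifier of the membership fact.
        CONSTRAINT PK_UserGroupMembership PRIMARY KEY (UserId, GroupId),
        CONSTRAINT FK_UGM_User  FOREIGN KEY (UserId)  REFERENCES dbo.Users  (UserId),
        CONSTRAINT FK_UGM_Group FOREIGN KEY (GroupId) REFERENCES dbo.Groups (GroupId)
    );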
I've just come across something disturbing: I was trying to implement transactional replication from a database whose design is not under our control. This replication was in order to perform reporting without taxing the system too much. Upon trying the replication, only some of the tables went across.
On investigation, tables were not selected to be replicated because they don't have a primary key. I thought this couldn't be right; a primary key is even shown if I use ODBC and MS Access, but not in Management Studio. Also, the queries are not ridiculously slow.
I tried inserting a duplicate record and it failed, complaining about a unique index (not a primary key). It seems the tables have been implemented using a unique index as opposed to a primary key. Why, I do not know; I could scream.
Is there any way to perform transactional replication, or an alternative? It needs to be live (within the last minute or two). The main DB server is currently SQL Server 2000 SP3a and the reporting server is SQL Server 2005.
The only thing I have thought of trying so far is setting the replication up as if it were another type of database. I believe replication to, say, Oracle is possible. Would this force the use of, say, an ODBC driver, like I assume Access is using, and hence show a primary key? I don't know if that is accurate; I'm out of my depth on this.
As MSDN states, it is not possible to use transactional replication on tables without primary keys. You could use Merge replication (one-way), which doesn't require a primary key and automatically creates a rowguid column if one doesn't exist:
Merge replication uses a globally unique identifier (GUID) column to identify each row during the merge replication process. If a published table does not have a uniqueidentifier column with the ROWGUIDCOL property and a unique index, replication adds one. Ensure that any SELECT and INSERT statements that reference published tables use column lists. If a table is no longer published and replication added the column, the column is removed; if the column already existed, it is not removed.
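If you would rather control that column yourself, you can add it before publishing, and merge replication will reuse it instead of adding its own. A sketch with a hypothetical table name:

    ALTER TABLE dbo.Orders
        ADD rowguid UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL
            CONSTRAINT DF_Orders_rowguid DEFAULT NEWID()  -- one GUID per row
            CONSTRAINT UQ_Orders_rowguid UNIQUE;          -- the unique index replication expects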
Unfortunately, you will pay a performance penalty if you use merge replication.
If you need the replication for reporting only, and you don't need the data to be exactly the same as on the publisher, then you could also consider snapshot replication.