Best way of optimizing a large DB for primary key queries

Suppose you have a very large database, and, to simplify, let's say it consists of one major table you will be doing your lookups on, with one (and only one) primary key field - pk.
Given the fact that all lookups are going to be basically SELECT * FROM table_name WHERE pk=someKeyValue, what is the best way to optimize this database for the fastest lookups?
Edit: just a few more details - INSERTs and UPDATEs are going to be very infrequent, so I don't mind sacrificing performance there to achieve better lookup performance.
Also, seems like clustering is the way to go. Do you have any examples of the kind of increase in performance I can achieve with this method? And how exactly is this done (on any kind of DB)?

If the primary key is clustered, then your lookups won't get any quicker.
If it isn't clustered, and the number of columns in your table is relatively small, then you could in theory create a covering index to speed up the query. But then this negates any insert/update performance enhancements that having the non-clustered primary key would have given you.
If your primary key is an always-increasing field (e.g. a SQL Server identity, or generated from a sequence in Oracle) then the clustered primary key has no drawbacks anyway.

One thing you could do is make the primary key clustered; this results in the actual data being physically ordered on disk, which means faster queries.
It will also mean slower inserts, but if you select much more frequently than you insert, this should not be a problem.
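For illustration, a minimal T-SQL sketch (the table and column names here are invented, and this assumes SQL Server, where a PRIMARY KEY constraint is clustered by default unless you say otherwise):

CREATE TABLE dbo.big_table (
    pk      INT          NOT NULL,
    payload VARCHAR(200) NULL,
    -- PRIMARY KEY CLUSTERED makes the b-tree leaf level hold the row data itself
    CONSTRAINT PK_big_table PRIMARY KEY CLUSTERED (pk)
);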

If you're using MySQL, you can do some additional things (beyond tuning your cache values). The table engine can be a factor; for instance, MyISAM is widely held to be faster at SELECTs than InnoDB. If this table is primarily a lookup table, and you were using MySQL, that might be a good thing to do. (InnoDB is pretty good on average; it's better on writes than MyISAM, and also, InnoDB never needs to be repaired.)

I have to add two more options to all that was proposed above (I like dwc’s answer). You should consider partitioning if your table is really big.
First, horizontal partitioning (especially if I/O is the bottleneck in your DB). You create several filegroups and locate them on different hard drives. Then you create a partition function and a partition scheme to divide your table and put parts of it on separate drives (e.g. rows 1-499999 on the F: drive, 500000-999999 on the G: drive, and so on); see the sketch at the end of this answer.
Second, vertical partitioning. This would work if you select column sets (not *) in most of your queries. In this case, divide the columns of the table into two groups: first, fields that you need in all queries; second, fields that you rarely need. Create two tables with the same primary key. Use JOINs on the primary key when you need columns from both tables.
(This answer pertains to SQL Server 2005/2008.)
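As a rough, hedged illustration of the horizontal partitioning idea (the filegroup and object names below are invented, and the filegroups are assumed to already exist on separate drives):

-- Split rows by pk range: 1-499999 | 500000-999999 | everything else
CREATE PARTITION FUNCTION pf_pk_range (INT)
AS RANGE LEFT FOR VALUES (499999, 999999);

-- Map each range to its own filegroup (and therefore its own disk)
CREATE PARTITION SCHEME ps_pk_range
AS PARTITION pf_pk_range TO (FG1, FG2, FG3);

-- Create (or rebuild) the table on the partition scheme, keyed by pk
CREATE TABLE dbo.big_table (
    pk      INT          NOT NULL,
    payload VARCHAR(200) NULL,
    CONSTRAINT PK_big_table PRIMARY KEY CLUSTERED (pk)
) ON ps_pk_range (pk);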

If all your queries are going to be based off the PK, you won't get any added benefit from creating an extra index on the PK, since it is already indexed as the primary key.
Edit: The only other thing I would suggest is looking at normalizing your table (if that is even an option or necessity). By splitting items off into other tables, you can refine what is being pulled back in each query and only pull the less-used items when needed, using joins.
Based on the limited description of "a very large database with a single table", it is hard to find any easy and obvious ways to optimize without looking at what kind of data you are actually storing in your fields.

If your PK order matches insertion order, i.e. time or id/autoincrement, then make it clustered. This will reduce disk and cache thrashing on inserts, leaving more resources to devote to lookups.
Consider tweaking page sizes on the table to be an exact multiple of your record size. This requires intimate knowledge of the particular database software for details of how, and record/index overhead, etc.
If practical, use fixed-size for all columns rather than variable size.
Consider putting the index and/or transaction log files on a separate volume.
Install as much RAM as the software and hardware can use.

If you were using Oracle then I'd advise benchmarking three approaches:
Heap table with primary key index
Index-organised table
Single table hash cluster
1 represents a very vanilla approach -- really it's the lowest common denominator, but could mean 5+ logical reads to get each row, with one of those being a probable physical read of the table if it is not completely cached.
2 will save you one of those logical reads by avoiding the probe to a separate table segment, but might not save you the physical read because the IOT segment will be larger and harder to cache than the index alone.
3 will potentially get you the row with a single logical read, but unless you have the entire table cached that's probably going to translate into a physical read.
Benchmarking is highly recommended.
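To make the three options concrete, here are hedged Oracle DDL sketches (all object names are invented and the sizing numbers are arbitrary):

-- 1. Plain heap table with a primary key index
CREATE TABLE lookup_heap (
    pk  NUMBER PRIMARY KEY,
    val VARCHAR2(200)
);

-- 2. Index-organised table: the rows live in the primary key index itself
CREATE TABLE lookup_iot (
    pk  NUMBER PRIMARY KEY,
    val VARCHAR2(200)
) ORGANIZATION INDEX;

-- 3. Single table hash cluster: HASHKEYS and SIZE must be estimated up front
CREATE CLUSTER lookup_hc (pk NUMBER)
    SIZE 256 SINGLE TABLE HASHKEYS 1000000;

CREATE TABLE lookup_hash (
    pk  NUMBER PRIMARY KEY,
    val VARCHAR2(200)
) CLUSTER lookup_hc (pk);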

Related

SQL Server: ~2000 Heap Tables all using GUID Uniqueidentifier - Possible Clustered Indexing?

I have just taken over a database which has around 2200 tables. Over 2000 of these have no clustered index (some have no indexes at all).
All of the tables have been configured to use a GUID as the uniqueidentifier.
Just looking at the query plans, I can see that there are many table scans occurring. Most searches use the uniqueidentifier to search on.
I am wondering if it is better to have a clustered index on the GUID than not to have a clustered index at all. I imagine that a clustered index on a 16-byte column will inevitably lead to fragmentation.
I could arguably cluster on other columns, but the majority of searches tend to search by or join via the GUIDs.
Any advice would be very much welcomed. I've never seen so many GUIDs!
In general, I would recommend having an identity column as the primary key and using that for clustering. This is also a better choice for joins.
Why? First, identity keys are generally shorter than uniqueidentifiers, so foreign key references and indexes are smaller.
More importantly, inserts would always go at the "end" of the table. When using GUIDs, inserts are often going to cause fragmentation. If you are inserting rows, I would say that a secondary index on the GUID might be better than a clustered index (the fragmentation is only in the index).
With 2000 tables, I doubt you will change the structure. You can ameliorate the fragmentation using newsequentialid().
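As a hedged sketch (table and constraint names invented; note that NEWSEQUENTIALID() can only appear in a column DEFAULT, so it only helps where the server generates the GUID):

CREATE TABLE dbo.example_table (
    id UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_example_table_id DEFAULT NEWSEQUENTIALID(),
    CONSTRAINT PK_example_table PRIMARY KEY CLUSTERED (id)
);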
A GUID column with random values is usually not the best choice for a clustered index, because it can be the root cause of index fragmentation:
Read-ahead by the database won't be effective;
Insert operations become too expensive, because you get lots of page-split overhead.
There are 3 ways you can live with that:
Schedule regular index reorganizing and rebuilding, which reduces index fragmentation and keeps your statistics up to date (see the sketch below);
Use newsequentialid() for generating values of this column;
Generate the GUID values sequentially outside of the database (the Guid.Comb identifier in NHibernate is a great example of solving this issue).
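For the first option, a hedged maintenance sketch (index and table names are made up): reorganize at moderate fragmentation, rebuild when it is heavy, optionally leaving some free space per page.

ALTER INDEX PK_example_table ON dbo.example_table REORGANIZE;
ALTER INDEX PK_example_table ON dbo.example_table REBUILD WITH (FILLFACTOR = 80);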
This is really a comment on your question, to add to Gordon's good answer:
Firstly, don't forget to check the index DMVs to see which ones are being used (or not used) and have a look at the expensive query plans in the cache to focus on the tables and queries that will be causing most pain. I would expect that many of those 2200 tables are relatively small & the queries are able to look up pretty quickly even from the guid clustered index.
For those tables that aren't clustered, clustering on the guid would reduce fragmentation, since it forces all the data for the table to be colocated rather than allowing pages to be put in the next free extent & spreading tables all over the disk. This should make some of the I/O more efficient.
Check that you have a low enough fill factor so that your regular index rebuilds leave room and avoid page splitting, although the right value is also workload dependent (OLTP vs DW, and the read/write ratio of the table).
If you have applications that are doing explicit column selects/inserts then you may be able to add an identity column without breaking anything. That allows you to cluster on the identity and add an index to the guid; a sketch follows at the end of this answer. Whether this really helps depends on the relative (in)efficiency of the new plans.
You could consider clustering around a non-guid field where queries will lookup against it fairly regularly (eg, a date range) and index the guid separately.
You'd have to look at the queries & relative performance for that more closely.
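A hedged sketch of the identity-column approach (hypothetical table and column names; assumes applications use explicit column lists so the new column breaks nothing):

-- Add a narrow, ever-increasing surrogate and cluster on it
ALTER TABLE dbo.some_table ADD row_id INT IDENTITY(1, 1) NOT NULL;
CREATE UNIQUE CLUSTERED INDEX CIX_some_table_row_id ON dbo.some_table (row_id);

-- Keep lookups by GUID fast with a separate non-clustered index
CREATE UNIQUE NONCLUSTERED INDEX IX_some_table_guid ON dbo.some_table (guid_col);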

When should I use Oracle's Index Organized Table? Or, when shouldn't I?

Index Organized Tables (IOTs) are tables stored in an index structure. Whereas a table stored in a heap is unorganized, data in an IOT is stored and sorted by primary key (the data is the index). IOTs behave just like “regular” tables, and you use the same SQL to access them.
Every table in a proper relational database is supposed to have a primary key... If every table in my database has a primary key, should I always use an index organized table?
I'm guessing the answer is no, so when is an index organized table not the best choice?
Basically an index-organized table is an index without a table. There is a table object which we can find in USER_TABLES but it is just a reference to the underlying index. The index structure matches the table's projection. So if you have a table whose columns consist of the primary key and at most one other column then you have a possible candidate for INDEX ORGANIZED.
The main use case for index organized table is a table which is almost always accessed by its primary key and we always want to retrieve all its columns. In practice, index organized tables are most likely to be reference data, code look-up affairs. Application tables are almost always heap organized.
The syntax allows an IOT to have more than one non-key column. Sometimes this is correct. But it is also an indication that maybe we need to reconsider our design decisions. Certainly if we find ourselves contemplating the need for additional indexes on the non-primary key columns then we're probably better off with a regular heap table. So, as most tables probably need additional indexes most tables are not suitable for IOTs.
Coming back to this answer I see a couple of other responses in this thread propose intersection tables as suitable candidates for IOTs. This seems reasonable, because it is common for intersection tables to have a projection which matches the candidate key: STUDENTS_CLASSES could have a projection of just (STUDENT_ID, CLASS_ID).
I don't think this is cast-iron. Intersection tables often have a technical key (i.e. STUDENT_CLASS_ID). They may also have non-key columns (metadata columns like START_DATE, END_DATE are common). Also there is no prevailing access path - we want to find all the students who take a class as often as we want to find all the classes a student is taking - so we need an indexing strategy which supports both equally well. I'm not saying intersection tables are not a use case for IOTs, just that they are not automatically so.
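For example, a hedged sketch of the STUDENTS_CLASSES case as an IOT (assuming the composite key really is the whole projection):

CREATE TABLE students_classes (
    student_id NUMBER NOT NULL,
    class_id   NUMBER NOT NULL,
    CONSTRAINT pk_students_classes PRIMARY KEY (student_id, class_id)
) ORGANIZATION INDEX;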
I'd consider them for very narrow tables (such as the join tables used to resolve many-to-many relationships). If (virtually) all the columns in the table are going to be in an index anyway, then why shouldn't you use an IOT?
Small tables can be good candidates for IOTs as discussed by Richard Foote here
I consider the following kinds of tables excellent candidates for IOTs:
"small" "lookup" type tables (e.g. queried frequently, updated infrequently, fits in a relatively small number of blocks)
any table that you already are going to have an index that covers all the columns anyway (i.e. may as well save the space used by the table if the index duplicates 100% of the data)
From the Oracle Concepts guide:
Index-organized tables are useful when related pieces of data must be stored together or data must be physically stored in a specific order. This type of table is often used for information retrieval, spatial (see "Overview of Oracle Spatial"), and OLAP applications (see "OLAP").
This question from AskTom may also be of some interest, especially where someone gives a scenario and then asks whether an IOT would perform better than a heap organised table. Tom's response is:
we can hypothesize all day long, but until you measure it, you'll never know for sure.
An index-organized table is generally a good choice if you only access data from that table by the key, the whole key, and nothing but the key.
Further, there are many limitations about what other database features can and cannot be used with index-organized tables -- I recall that in at least one version one could not use logical standby databases with index-organized tables. An index-organized table is not a good choice if it prevents you from using other functionality.
All an IOT really saves is the logical read(s) on the table segment, and as you might have spent two or three or more logical reads on the IOT/index anyway, this is not always a great saving except for small data sets.
Another feature to consider for speeding up lookups, particularly on larger tables, is a single table hash cluster. When correctly created they are more efficient for large data sets than an IOT because they require only one logical read to find the data, whereas an IOT is still an index that needs multiple logical i/o's to locate the leaf node.
I can't per se comment on IOTs, however if I'm reading this right then they're the same as a 'clustered index' in SQL Server. Typically you should think about not using such an index if your primary key (or the value(s) you're indexing if it's not a primary key) are likely to be distributed fairly randomly - as these inserts can result in many page splits (expensive).
Indexes such as identity columns (sequences in Oracle?) and dates 'around the current date' tend to make for good candidates for such indexes.
An Index-Organized Table--in contrast to an ordinary table--has its own way of structuring, storing, and indexing data.
Index organized tables (IOT) are indexes which actually hold the data which is being indexed, unlike the indexes which are stored somewhere else and have links to actual data.

I have one table with ten foreign keys. How much of a hit am I going to take on insertion if I have indexes on all ten keys?

The consensus seems to be that all foreign keys need to have indexes. How much overhead am I going to incur on inserts if I follow the letter of the law?
NOTES:
Assume that the database is a good design, and that all of the joins are legitimate.
All Primary and Foreign Keys are of type Int.
Some tables are lookup tables, with fewer than ten records, that are not likely to grow in size.
It is an OLTP database.
Some of the joins are to lookup tables with fewer than 10 records.
Here is a decent list of examples of when and what type of index to use. I don't think you should accept the "law" and index everything. You need to define which columns will be used in query joins and index accordingly.
There is a significant performance penalty on inserts as all of the indexes need to be updated. Roughly, you will incur one disk write for the insert on the large table and slightly more than one (on average) for each index on the table. Each index leaf node will incur a write, and some additional writes will occur from time to time as the leaf and (less often) parent nodes split.
Each table and index write will also incur log traffic. The particularly nasty penalty is on bulk inserted data, as active indexes on tables where you are inserting bulk loaded data will be updated for each row - and these updates are not minimally logged. This will massively blow out your I/O (which will all be random access rather than nice sequential bulk writes) and will also generate vast amounts of log traffic.
There's no need to put an index on foreign keys that point into lookup tables with small numbers of elements.
The only possible way to answer your question is to test. For instance, if any of the keys have a cardinality of 10, they probably won't be very helpful. So you've got some testing work to do. But it has a lot to do with your table sizes, the key sizes, the absolute activity level and the mix of CRUD elements. Distrust all simple answers.
EDIT:
If you have no data currently because this is the initial design, start with only the obvious indexes and add others as you need them based on testing. It makes little sense to add them all unless it's a low-change database. But if it's read-only, there's not much penalty at all. (Another piece of information you haven't provided.)
The consensus seems to be that all foreign keys need to have indexes. How much overhead am I going to incur on inserts if I follow the letter of the law?
There are two overheads: on DML over the referencing table, and DML over the referenced table.
A referenced table should have an index, otherwise you won't be able to create a FOREIGN KEY.
A referencing table can have no index. It will make the INSERT's into the referencing table a little bit slower, and won't affect INSERT's into a referenced table.
Whenever you insert a row into a referencing table, the following occurs:
The row is checked against the FOREIGN KEY as in this query:
SELECT TOP 1 NULL
FROM referenced ed
WHERE ed.pk = #new_fk_value
The row is inserted
The index on the row (if any) is updated.
The first two steps are always performed, and step 1 generally uses an index on the referenced table (again, you just cannot create a FOREIGN KEY relationship without having this index).
Step 1 is the only overhead specific to a FOREIGN KEY.
The overhead of step 3 comes only from the fact that the index exists; it would be exactly the same if there were no FOREIGN KEY.
But UPDATE's and DELETE's from the referenced table can be much slower if you don't define an index on the referencing table, especially if the latter is large.
Whenever you DELETE from the referenced table, the following occurs:
The rows are checked against the FOREIGN KEY as in this query:
SELECT TOP 1 NULL
FROM referencing ing
WHERE ing.fk = #old_pk_value
The row is deleted
The index on the row is updated.
It's easy to see that this query will most probably benefit from an index on referencing.fk.
Otherwise, the optimizer will need to build a hash table over the whole referencing table just to check the constraint, even if you are deleting a single record.
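So if deletes or key updates on the referenced table matter to you, the fix is simply an index on the referencing column (a sketch using the placeholder names from above):

CREATE INDEX ix_referencing_fk ON referencing (fk);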
The only way to know the impact is to test. The answer may differ greatly depending on whether your system tends to insert large amounts of data in a bulk insert or one record at a time from the user interface. It also depends a lot on the size of the tables and the total number of indexes. Testing is the only way to know for certain what indexes you should use. A general rule of thumb is to start by indexing foreign key fields and fields you will be using in the where clauses. But that's just where to start looking at your system, not the "be all - end all" answer.
I will say that I have observed that users tend to be more tolerant of a little longer time spent on insert than they are of more time spent on querying the system. This is especially true since senior managers tend to do more querying than data entry and they can get cranky and have the power to do something about it if they feel their time is being wasted.
In a new system you need to generate test records at the expected volume the system will have when implemented. If you don't, then you will find that the queries (and design) that worked OK in a small test bed can be horrible with real users doing multiple things simultaneously against large tables. It's no fun at all to refactor a database where performance wasn't considered and tested in the design. It's no fun to pull back production changes because the query takes longer than the timeout setting because the developer didn't test against the true volume (or, in the case of a new project, the expected volume).
SQL Server has tools to help you determine the best indexes. Use the indexing wizard and the execution plans to see where you need indexes. Put indexes on the fields and test inserts to see if there is a negative impact. There is no one right answer. It won't even stay the same answer for the lifetime of your database, in all likelihood.
Insert/update/delete always hits the index and writes to it. Select sometimes hits the index to read from it, depending on the query optimizer's analysis or best guess. If you don't need an index to speed up reads (such as if the column only has a low number of potential values), then get rid of it.
If you have a billion rows in a child table and wish to delete 100 million of them because you're deleting one row from the parent table where that row is the parent to all 100 million of the child rows, then having an index will only slow the whole operation down, because the system has to delete from the index too, but it won't speed the operation up, because the system will not use the index to speed up choosing which rows to delete.
I know performance is a critical issue.
IMO, you should consider the ramifications of not having an index (and therefore no FK) on OLTP data. You can suffer data integrity issues on such a system.
Thank you all for your input.
Based on your feedback, I think I will add indexes to all of the foreign keys EXCEPT those pointing to lookup tables (containing a small number of records that are not likely to change). This will cut the number of required foreign key indexes in half (from ten to five).
If anyone has further insight, feel free to post new answers. I still have some votes left. :)
Are the fields going to be used in searching and sorting? If so, an index might be a good idea. The only way to know is to test, measure, and test again.
Edit: The lookup table will probably be cached, but that won't help a search query against the referencing table, i.e. your data table.

When do you use table clusters?

How do you determine when to use table clusters? There are two types, index and hash, to use for different cases. In your experience, have the introduction and use of table clusters paid off?
If none of your tables are set up this way, modifying them to use table clusters would add to the complexity of the setup. But would the expected performance benefits outweigh the cost of increased complexity in future maintenance work?
Do you have any favorite online references or books that describe table clustering well and give good implementation examples?
//Oracle tips greatly appreciated.
The killer feature of table clusters is that you can store related rows of different tables at the same physical location.
That can improve join performance by an order of magnitude. However, it doesn't pay off as often as it sounds.
The only time I used it was a three-table join, executed by two hash joins. It took too long ;). However, the join was on the same column, so it was possible to use a hash table cluster keyed by the join column. That caused all related rows to be stored alongside (ideally, in the same database block). Knowing that, Oracle can execute the join with a special optimization ("cluster join").
It's more or less pre-joined, but still feeling like normal tables (for INSERT/SELECT/UPDATE/DELETE).
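A hedged Oracle sketch of that idea, reduced to two tables (names invented; the cluster is keyed by the shared join column so related rows land in the same blocks):

CREATE CLUSTER order_cluster (order_id NUMBER)
    SIZE 1024 HASHKEYS 100000;

CREATE TABLE orders (
    order_id   NUMBER PRIMARY KEY,
    order_date DATE
) CLUSTER order_cluster (order_id);

CREATE TABLE order_lines (
    order_id NUMBER NOT NULL REFERENCES orders (order_id),
    line_no  NUMBER NOT NULL,
    qty      NUMBER
) CLUSTER order_cluster (order_id);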
On the other hand, there are "single table clusters" that are mostly used to control the "clustering factor" -- a similar idea to clustered indexes (called index-organized tables in Oracle), but without adding a high cost when using a secondary index.
One can speak a lot about clustering, but I found that probably the best explanation of Oracle clusters (pros and cons, when and how to use them) is in Tom Kyte's book - Effective Oracle by Design; you can also search AskTom for some specific cluster usage examples (1, 2 etc). You should definitely take a look at this book if you haven't yet.
Some info you can also find here.
But the thing you should always do before creating complex schema structures is to try, to test, to benchmark and choose the one solution that best fits your needs :)
Hope this helps.
I haven't used Oracle's table clusters myself, but I understand that its index table clusters are very much like MS SQL Server's clustered indexes. That is, the row data is physically organized by the clustered index's key.
That makes one ideal for a heavily-accessed column that has a reasonably small number of possible values (compared to the total number of rows), where most queries want to retrieve all rows with a particular value. Because all such rows are physically stored together, disk I/O, particularly seek time, is reduced.
"Reasonably small" is not easily defined, but postal or zip codes in an address table seems reasonable if you're often querying for all addresses in a single code's region. Province/state/territory codes are likely too small a selection for a country-wide address table.
So, you don't want to use them on columns with few possible values (e.g., M/F for gender) because then the clustering doesn't buy you anything and likely costs you for insertions. You also never want to use clustering on "autonumber" surrogate key columns (from sequences in Oracle) because that will create a "hot spot" in the last extent of the table as all insertions must physically happen there. You also don't want to apply clustering to a column value that will be updated because the RDBMS will have to physically move the record to maintain the clustered ordering.
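A hedged sketch of an index cluster along those lines (names invented, SIZE is a guess; an index cluster needs its cluster index created before any rows go in):

CREATE CLUSTER address_cluster (postal_code VARCHAR2(10))
    SIZE 2048;

CREATE INDEX address_cluster_idx ON CLUSTER address_cluster;

CREATE TABLE addresses (
    address_id  NUMBER PRIMARY KEY,
    postal_code VARCHAR2(10) NOT NULL,
    street      VARCHAR2(200)
) CLUSTER address_cluster (postal_code);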

Tables with no Primary Key

I have several tables whose only unique data is a uniqueidentifier (a Guid) column. Because guids are non-sequential (and they're client-side generated so I can't use newsequentialid()), I have made a non-primary, non-clustered index on this ID field rather than giving the tables a clustered primary key.
I'm wondering what the performance implications are for this approach. I've seen some people suggest that tables should have an auto-incrementing ("identity") int as a clustered primary key even if it doesn't have any meaning, as it means that the database engine itself can use that value to quickly look up a row instead of having to use a bookmark.
My database is merge-replicated across a bunch of servers, so I've shied away from identity int columns as they're a bit hairy to get right in replication.
What are your thoughts? Should tables have primary keys? Or is it ok to not have any clustered indexes if there are no sensible columns to index that way?
When dealing with indexes, you have to determine what your table is going to be used for. If you are primarily inserting 1000 rows a second and not doing any querying, then a clustered index is a hit to performance. If you are doing 1000 queries a second, then not having an index will lead to very bad performance. The best thing to do when trying to tune queries/indexes is to use the Query Plan Analyzer and SQL Profiler in SQL Server. This will show you where you are running into costly table scans or other performance blockers.
As for the GUID vs ID argument, you can find people online that swear by both. I have always been taught to use GUIDs unless I have a really good reason not to. Jeff has a good post that talks about the reasons for using GUIDs: https://blog.codinghorror.com/primary-keys-ids-versus-guids/.
As with most anything development related, if you are looking to improve performance there is not one, single right answer. It really depends on what you are trying to accomplish and how you are implementing the solution. The only true answer is to test, test, and test again against performance metrics to ensure that you are meeting your goals.
[Edit]
@Matt, after doing some more research on the GUID/ID debate I came across this post. Like I mentioned before, there is not a true right or wrong answer. It depends on your specific implementation needs. But these are some pretty valid reasons to use GUIDs as the primary key:
For example, there is an issue known as a "hotspot", where certain pages of data in a table are under relatively high concurrency contention. Basically, what happens is most of the traffic on a table (and hence page-level locks) occurs on a small area of the table, towards the end. New records will always go to this hotspot, because IDENTITY is a sequential number generator. These inserts are troublesome because they require an Exclusive page lock on the page they are added to (the hotspot). This effectively serializes all inserts to a table thanks to the page locking mechanism. NewID() on the other hand does not suffer from hotspots. Values generated using the NewID() function are only sequential for short bursts of inserts (where the function is being called very quickly, such as during a multi-row insert), which causes the inserted rows to spread randomly throughout the table's data pages instead of all at the end - thus eliminating a hotspot from inserts.
Also, because the inserts are randomly distributed, the chance of page splits is greatly reduced. While a page split here and there isn't too bad, the effects do add up quickly. With IDENTITY, page Fill Factor is pretty useless as a tuning mechanism and might as well be set to 100% - rows will never be inserted in any page but the last one. With NewID(), you can actually make use of Fill Factor as a performance-enabling tool. You can set Fill Factor to a level that approximates estimated volume growth between index rebuilds, and then schedule the rebuilds during off-peak hours using dbcc reindex. This effectively delays the performance hits of page splits until off-peak times.
If you even think you might need to enable replication for the table in question - then you might as well make the PK a uniqueidentifier and flag the guid field as ROWGUIDCOL. Replication will require a uniquely valued guid field with this attribute, and it will add one if none exists. If a suitable field exists, then it will just use the one that's there.
Yet another huge benefit for using GUIDs for PKs is the fact that the value is indeed guaranteed unique - not just among all values generated by this server, but all values generated by all computers - whether it be your db server, web server, app server, or client machine. Pretty much every modern language has the capability of generating a valid guid now - in .NET you can use System.Guid.NewGuid. This is VERY handy when dealing with cached master-detail datasets in particular. You don't have to employ crazy temporary keying schemes just to relate your records together before they are committed. You just fetch a perfectly valid new Guid from the operating system for each new record's permanent key value at the time the record is created.
http://forums.asp.net/t/264350.aspx
The primary key serves three purposes:
indicates that the column(s) should be unique
indicates that the column(s) should be non-null
documents the intent that this is the unique identifier of the row
The first two can be specified in lots of ways, as you have already done.
The third reason is good:
for humans, so they can easily see your intent
for the computer, so a program that might compare or otherwise process your table can query the database for the table's primary key.
A primary key doesn't have to be an auto-incrementing number field, so I would say that it's a good idea to specify your guid column as the primary key.
Just jumping in, because Matt's baited me a bit.
You need to understand that although a clustered index is put on the primary key of a table by default, the two concepts are separate and should be considered separately. A CIX indicates the way that the data is stored and referred to by NCIXs, whereas the PK provides a uniqueness for each row to satisfy the LOGICAL requirements of a table.
A table without a CIX is just a Heap. A table without a PK is often considered "not a table". It's best to get an understanding of both the PK and CIX concepts separately so that you can make sensible decisions in database design.
Rob
Nobody answered the actual question: what are the pluses/minuses of a table with NO PK NOR a CLUSTERED index?
In my opinion, if you optimize for faster inserts (especially incremental bulk-inserts, e.g. when you bulk load data into a non-empty table), such a table: with NO clustered index, NO constraints, NO foreign keys, NO defaults and NO primary key, in a database with the Simple Recovery Model, is the best. Now, if you ever want to query this table (as opposed to scanning it in its entirety) you may want to add non-clustered, non-unique indexes as needed, but keep them to a minimum.
I too have always heard having an auto-incrementing int is good for performance even if you don't actually use it.
A Primary Key needn't be an autoincrementing field, in many cases this just means you are complicating your table structure.
Instead, a Primary Key should be the minimum collection of attributes (note that most DBMS will allow a composite primary key) that uniquely identifies a tuple.
In technical terms, it should be the field that every other field in the tuple is fully functionally dependent upon. (If it isn't you might need to normalise).
In practice, performance issues may mean that you merge tables, and use an incrementing field, but I seem to recall something about premature optimisation being evil...
Since you are doing replication, you are correct that identities are something to steer clear of. I would make your GUID a primary key, but nonclustered, since you can't use newsequentialid. That strikes me as your best course. If you don't make it a PK but put a unique index on it, sooner or later people who maintain the system may fail to understand the FK relationships properly, introducing bugs.
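A hedged sketch of that setup (hypothetical table and column names):

ALTER TABLE dbo.replicated_table
    ADD CONSTRAINT PK_replicated_table PRIMARY KEY NONCLUSTERED (guid_col);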
