Defining a column to be a primary key in a table on SQL Server - will this make inserts slower?
I ask because I understand this is the case for indexes.
The table has millions of records.
No, not necessarily! Sounds counter-intuitive, but read this quote from Kim Tripp's blog post:
Inserts are faster in a clustered table (but only in the "right" clustered table) than compared to a heap. The primary problem here is that lookups in the IAM/PFS to determine the insert location in a heap are slower than in a clustered table (where insert location is known, defined by the clustered key). Inserts are faster when inserted into a table where order is defined (CL) and where that order is ever-increasing.
So actually, having a good clustered index (e.g. on an INT IDENTITY column, if at all possible) does speed things up - even inserts, updates, and deletes!
Primary keys are automatically indexed, clustered if possible and failing that non-clustered.
So in that sense inserts are slightly affected, but of course having no primary key would usually be much much worse, assuming the table needs a primary key.
First measure, identify a problem and then try to optimize. Optimizing away primary keys is a very bad idea in general.
Not enough to create a perceptible performance hit, and the benefits far outweigh the very minor performance issues. There are very, very few scenarios where you should not put a primary key on a table.
The really quick answer is:
Yes.
Primary keys are always indexed (and SQL will attempt a clustered index). Indexes make inserts slower, clustered indexes even more so.
Depending on what your table is used for, you may have a couple of options.
If you do a lot of bulk inserts followed by reads, you can remove the primary key, insert into a heap (if you have SQL 2008 this can be minimally logged to run even faster), then re-add the key and wait for the index to build.
As an addendum to that, you can also insert using an ORDER BY clause which will keep the inserted rows in correct order for the clustered index. This will really only help if you are inserting millions of rows at once from a source that is already ordered, though.
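For illustration, a minimal sketch of such an ordered insert, assuming the target's clustered key is (a, b); all names here are hypothetical:

-- Pre-sorting on the clustered key lets rows append in index order.
INSERT INTO dbo.target_table (a, b, c)
SELECT a, b, c
FROM dbo.source_table
ORDER BY a, b;   -- matches the clustered index key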
Yes, adding a primary key to a table will slow inserts (which is OK, because not adding a primary key to your table will speed up your application's eventual catastrophic failure).
If what you're doing is creating a new table and then inserting millions of records into it, there is nothing wrong with initially creating the table without a primary key, inserting all the records, and then creating the primary key. Or use an alternative tool to perform a bulk insert.
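As a hedged sketch of that create-load-then-key workflow (the table and column names are made up):

-- 1) Create the table without a primary key and bulk load into the heap.
CREATE TABLE dbo.load_target (
    a INT NOT NULL,
    b SMALLINT NOT NULL,
    c TINYINT NOT NULL
);

INSERT INTO dbo.load_target (a, b, c)
SELECT a, b, c FROM dbo.some_source;

-- 2) Build the primary key once, after all the data is in.
ALTER TABLE dbo.load_target
    ADD CONSTRAINT PK_load_target PRIMARY KEY (a, b);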
Yes, inserts are slowed, especially with several clients doing inserts simultaneously, and more so if your key is sequentially increasing (in most database implementations all inserts then occur at the right-most nodes of the index tree - or, for e.g. clustered SQL Server indexes, at the last page of the table - and both scenarios cause resource contention).
That said, SELECTs using the primary key are sped up quite a bit, and the integrity of your key is guaranteed. Do the right thing first (define primary keys everywhere). Second, measure to see whether you cannot meet your performance targets and whether the shortfall is caused by your data integrity constraints. Only then consider workarounds.
No, not necessarily
Regardless, that is not why you would define a primary key on a table.
You define a primary key when it is REQUIRED by the domain model.
Related
I would like optimize the performance of a database that my team is using for an application.
I have been looking for areas to add foreign keys, and in turn index those columns to improve the performance of joins. However, many of our tables are joined on an id that is a GUID type, generated upon insertion of an item, and the data associated with that item in other tables generally has a column item_id containing the GUID.
I have read that adding clustered indexes to GUID type columns is a very bad decision because the index will need to be constantly reconstructed in order to be effective. However, I was wondering, is there any detriment to utilizing a non-clustered index in the scenario described above? Or is it reasonable to assume that it would help performance? I can provide more information if needed.
An index on a <anytype> is by far the best option you have to improve joins and singleton lookups. Lacking this index the query will always have to scan the entire table end-to-end with (often) abysmal performance results and concurrency gone down the drain.
It is true that uniqueidentifier makes a poor choice for indexes for the reasons you mention, but by no means does that imply that you should not create these indexes. Changing the data type to INT or BIGINT would be advisable, if possible. Using NEWSEQUENTIALID() or UuidCreateSequential to generate them would help with fragmentation issues. If all alternatives fail, you may have to do index maintenance (rebuild, reorganize) operations more often than for other indexes. But by no means do any of these drawbacks outweigh the benefit of having the index in the first place!
There are two sides to performance:
- insert
- select
An index should improve selects.
An index will slow down inserts.
If the inserts are in order, the index does not fragment.
If the inserts are not in order, the index will fragment.
Index fragmentation slows down both inserts and selects.
Index maintenance can defragment the index.
Adding a non-clustered index to the column that references the FK will help the joins (see the sketch below).
Since that column is most likely not ordered anyway, the fact that it is a GUID is no loss.
It is on the FK table itself that a GUID is a poor candidate for a PK (clustered index):
with a GUID as the PK, that index fragments on insert.
Int or sequential ID columns are better candidates, as they would not fragment the PK on insert.
But it's no big deal - just defragment those tables.
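For illustration, a minimal sketch of such an index; the table and column names here are assumptions based on the question:

-- Non-clustered index on the GUID FK column to support the joins.
CREATE NONCLUSTERED INDEX IX_ItemDetail_item_id
    ON dbo.ItemDetail (item_id);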
Yes, you are better off changing the Guid index from clustered to non-clustered. Guid can still be primary key and you don't need to change your query/source code. No reordering of data and increased performance.
In databases like SQL Azure it is mandatory to have a clustered index, so there you could use a date/datetime field. Creating an additional int identity/autoincrement column is unnecessary: some developers in a team will tend to use those and others the GUID, resulting in an inconsistent application. So keep only the GUID - full stop!
Talking about sequential Guids: I think Guids are better created from code than from the database. Modern DALs and repository patterns prefer not to depend on the DB for CRUD - e.g. LINQ queries and automated builds with unit testing without a DB dependency. And creating a sequential Guid ourselves is not a good idea (at least for me). So a Guid as primary key with a non-clustered index is the best option there is.
I have backing from Microsoft on the non-clustered subject http://blogs.msdn.com/b/sqlazure/archive/2010/05/05/10007304.aspx
Edited: Backing is gone ("No Resource Found")
It would usually help performance. But you may wish to create the index with a fillfactor of less than 100% such that the inevitable page-splits don't have to happen quite so often. Regular maintenance on the index would certainly be a plus.
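As a hedged sketch of that suggestion (the index name, table name, and the 80% figure are all hypothetical choices):

-- Leave 20% free space per leaf page to absorb the random GUID inserts.
CREATE NONCLUSTERED INDEX IX_Item_item_id
    ON dbo.Item (item_id)
    WITH (FILLFACTOR = 80);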
Yes, a non-clustered index would be ideal for your situation. The underlying structure is a B-tree, like the clustered index, but the underlying data of the table is not sorted by it, so the problems with the non-sequential nature of the GUID do not exist. The NC index exists separately from the table.
Be careful not to add too many non-clustered indexes, though. Optimize only where you need to: run the profiler to see which queries are taking a long time, and optimize only those. Additionally, be sure to set the fill factor to a value below 50% unless the database rarely gets any updates, or space is a constraint.
Relevant MSDN: http://msdn.microsoft.com/en-us/library/ms177484(v=sql.105).aspx
I have an application that uses GUID as the Primary Key in almost all tables and I have read that there are issues about performance when using GUID as Primary Key. Honestly, I haven't seen any problem, but I'm about to start a new application and I still want to use the GUIDs as the Primary Keys, but I was thinking of using a Composite Primary Key (The GUID and maybe another field.)
I'm using a GUID because they are nice and easy to manage when you have different environments such as "production", "test" and "dev" databases, and also for migration data between databases.
I will use Entity Framework 4.3 and I want to assign the Guid in the application code, before inserting it in the database. (i.e. I don't want to let SQL generate the Guid).
What is the best practice for creating GUID-based Primary Keys, in order to avoid the supposed performance hits associated with this approach?
GUIDs may seem to be a natural choice for your primary key - and if you really must, you could probably argue to use it for the PRIMARY KEY of the table. What I'd strongly recommend not to do is use the GUID column as the clustering key, which SQL Server does by default, unless you specifically tell it not to.
You really need to keep two issues apart:
the primary key is a logical construct - one of the candidate keys that uniquely and reliably identifies every row in your table. This can be anything, really - an INT, a GUID, a string - pick what makes most sense for your scenario.
the clustering key (the column or columns that define the "clustered index" on the table) - this is a physical storage-related thing, and here, a small, stable, ever-increasing data type is your best pick - INT or BIGINT as your default option.
By default, the primary key on a SQL Server table is also used as the clustering key - but that doesn't need to be that way! I've personally seen massive performance gains when breaking up the previous GUID-based primary/clustered key into two separate keys - the primary (logical) key on the GUID, and the clustering (ordering) key on a separate INT IDENTITY(1,1) column.
As Kimberly Tripp - the Queen of Indexing - and others have stated a great many times - a GUID as the clustering key isn't optimal, since due to its randomness, it will lead to massive page and index fragmentation and to generally bad performance.
Yes, I know - there's newsequentialid() in SQL Server 2005 and up - but even that is not truly and fully sequential and thus also suffers from the same problems as the GUID - just a bit less prominently so.
Then there's another issue to consider: the clustering key on a table will be added to each and every entry on each and every non-clustered index on your table as well - thus you really want to make sure it's as small as possible. Typically, an INT with 2+ billion rows should be sufficient for the vast majority of tables - and compared to a GUID as the clustering key, you can save yourself hundreds of megabytes of storage on disk and in server memory.
Quick calculation - using INT vs. GUID as Primary and Clustering Key:
Base Table with 1'000'000 rows (3.8 MB vs. 15.26 MB)
6 nonclustered indexes (22.89 MB vs. 91.55 MB)
TOTAL: 25 MB vs. 106 MB - and that's just on a single table!
Some more food for thought - excellent stuff by Kimberly Tripp - read it, read it again, digest it! It's the SQL Server indexing gospel, really.
GUIDs as PRIMARY KEY and/or clustered key
The clustered index debate continues
Ever-increasing clustering key - the Clustered Index Debate..........again!
Disk space is cheap - that's not the point!
PS: of course, if you're dealing with just a few hundred or a few thousand rows - most of these arguments won't really have much of an impact on you. However: if you get into the tens or hundreds of thousands of rows, or you start counting in millions - then those points become very crucial and very important to understand.
Update: if you want to have your PKGUID column as your primary key (but not your clustering key), and another column MYINT (INT IDENTITY) as your clustering key - use this:
CREATE TABLE dbo.MyTable
    (PKGUID UNIQUEIDENTIFIER NOT NULL,
     MyINT INT IDENTITY(1,1) NOT NULL,
     .... add more columns as needed ......)

ALTER TABLE dbo.MyTable
    ADD CONSTRAINT PK_MyTable
    PRIMARY KEY NONCLUSTERED (PKGUID)

CREATE UNIQUE CLUSTERED INDEX CIX_MyTable ON dbo.MyTable(MyINT)
Basically: you just have to explicitly tell the PRIMARY KEY constraint that it's NONCLUSTERED (otherwise it's created as your clustered index by default) - and then you create a second index that's defined as CLUSTERED.
This will work - and it's a valid option if you have an existing system that needs to be "re-engineered" for performance. For a new system, if you start from scratch, and you're not in a replication scenario, then I'd always pick ID INT IDENTITY(1,1) as my clustered primary key - much more efficient than anything else!
I've been using GUIDs as PKs since 2005. In this distributed database world, it is absolutely the best way to merge distributed data. You can fire and forget merge tables without all the worry of ints matching across joined tables. GUID joins can be copied without any worry.
This is my setup for using GUIDs:
PK = GUID. GUIDs are indexed similarly to strings, so tables with high row counts (over 50 million records) may need table partitioning or other performance techniques. SQL Server is getting extremely efficient, so performance concerns are less and less applicable.
The PK Guid is a NON-clustered index. Never cluster-index a GUID unless it is a NewSequentialID - and even then, a server reboot will cause major breaks in the ordering.
Add a ClusterID int to every table. This is your CLUSTERED index... the one that orders your table.
Joining on ClusterIDs (int) is more efficient, but I work with 20-30 million record tables, so joining on GUIDs doesn't visibly affect performance. If you want maximum performance, use the ClusterID concept as your primary key and join on ClusterID.
Here is my Email table...
CREATE TABLE [Core].[Email] (
    [EmailID]      UNIQUEIDENTIFIER CONSTRAINT [DF_Email_EmailID] DEFAULT (newsequentialid()) NOT NULL,
    [EmailAddress] NVARCHAR (50) CONSTRAINT [DF_Email_EmailAddress] DEFAULT ('') NOT NULL,
    [CreatedDate]  DATETIME CONSTRAINT [DF_Email_CreatedDate] DEFAULT (getutcdate()) NOT NULL,
    [ClusterID]    INT NOT NULL IDENTITY,
    CONSTRAINT [PK_Email] PRIMARY KEY NONCLUSTERED ([EmailID] ASC)
);
GO
CREATE UNIQUE CLUSTERED INDEX [IX_Email_ClusterID] ON [Core].[Email] ([ClusterID])
GO
CREATE UNIQUE NONCLUSTERED INDEX [IX_Email_EmailAddress] ON [Core].[Email] ([EmailAddress] ASC)
I am currently developing a web application with EF Core, and here is the pattern I use:
All my classes (tables) have an int PK and FK.
I then have an additional column of type Guid (generated by the C# constructor) with a non-clustered index on it.
All the joins of tables within EF are managed through the int keys, while all access from outside (controllers) is done with the Guids.
This solution keeps the int keys out of the URLs while keeping the model tidy and fast.
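In T-SQL terms, that pattern might look like the following sketch (the table and column names are invented for illustration):

-- Int identity is the clustered PK used for all joins;
-- the Guid is only for lookups from the outside.
CREATE TABLE dbo.Customer (
    CustomerID INT IDENTITY(1,1) NOT NULL,
    PublicID   UNIQUEIDENTIFIER NOT NULL,   -- generated by the C# constructor
    Name       NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID)
);

CREATE UNIQUE NONCLUSTERED INDEX IX_Customer_PublicID
    ON dbo.Customer (PublicID);   -- fast lookup by the externally visible Guid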
This link says it better than I could and helped in my decision making. I usually opt for an int as a primary key unless I have a specific need not to, and I also let SQL Server auto-generate/maintain this field unless I have some specific reason not to. In reality, performance concerns need to be determined based on your specific app; there are many factors at play, including but not limited to expected DB size, proper indexing, and efficient querying. Although people may disagree, I think in many scenarios you will not notice a difference with either option, and you should choose what is more appropriate for your app and what allows you to develop more easily, quickly, and effectively. (If you never complete the app, what difference does the rest make? :)
https://web.archive.org/web/20120812080710/http://databases.aspfaq.com/database/what-should-i-choose-for-my-primary-key.html
P.S. I'm not sure why you would use a Composite PK or what benefit you believe that would give you.
Well, if your data never reaches millions of rows, you are good. If you ask me: I never use a GUID as a database identity column of any type, PK included, even if you force me to design with a shotgun at my head.
Using a GUID as the primary key is a definitive scaling stopper, and a critical one.
I recommend you check the database identity and sequence options. A sequence is table-independent and may provide a solution for your needs (MS SQL has sequences).
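For illustration, a minimal sequence sketch (SQL Server 2012 and later; all names are hypothetical):

-- A table-independent sequence used as the key default.
CREATE SEQUENCE dbo.OrderID_Seq AS BIGINT
    START WITH 1
    INCREMENT BY 1;

CREATE TABLE dbo.Orders (
    OrderID BIGINT NOT NULL
        CONSTRAINT DF_Orders_OrderID DEFAULT (NEXT VALUE FOR dbo.OrderID_Seq),
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);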
If your tables start reaching into the tens of millions of rows - say 50 million - you will not be able to read/write information at acceptable timings, and even standard database index maintenance will become impossible.
Then you need to use partitioning to stay scalable up to half a billion or even 1-2 billion rows. Adding partitioning along the way is not the easiest thing: all read/write statements must include the partition column (full app changes!).
These numbers (50 million and 500 million) are for light selecting usage. If you need to select information in complex ways and/or have lots of inserts/updates/deletes, the thresholds for a very demanding system could even be 1-2 million and 50 million instead. If you also add factors like the full recovery model, high availability, and no maintenance window - all common for modern systems - things become extremely ugly.
Note at this point that the 2 billion int limit may look bad, but an int is 4 times smaller than a GUID and is a sequential type of data; small size and a sequential type are the #1 factors for database scalability. You can also use bigint, which is still half the size of a GUID and still sequential - and sequential is what is really, deadly important (even more important than size) when it comes to many millions or a few billion rows.
If the GUID is also clustered, things are much worse: just inserting a new row means it is actually stored at a random physical position.
Even as just a column - not the PK or part of the PK - merely indexing it is trouble, from a fragmentation perspective.
Having a GUID column is perfectly OK, like any varchar column, as long as you do not use it as part of the PK or, in general, as a key column to join tables. Your database must have its own PK elements and filter and join data using them; additionally filtering by a GUID afterwards is perfectly OK.
Having sequential IDs makes it a LOT easier for a hacker or data miner to compromise your site and data. Keep that in mind when choosing a PK for a website.
If you use a GUID as the primary key and create a clustered index, then I suggest using the default of NEWSEQUENTIALID() for it.
Another reason not to expose an Id in the user interface is that a competitor can see your Id incrementing over a day or other period and so deduce the volume of business you are doing.
Most of the time it should not be used as the primary key for a table, because it really hits the performance of the database.
Useful links regarding the GUID impact on performance and as a primary key:
https://www.sqlskills.com/blogs/kimberly/disk-space-is-cheap/
https://www.sqlskills.com/blogs/kimberly/guids-as-primary-keys-andor-the-clustering-key/
I have an Oracle background, and using "index-organized tables" (IOTs) for every table sounds unreasonable in Oracle; I have never actually seen it done. In SQL Server, every database I have worked on has a clustered index on every table, which is conceptually the same as an IOT.
Why is that? Is there any reason for using clustered indexes everywhere? It seems to me like they would be good only for a handful of cases.
Thanks
A clustered index is not quite the same thing as an index-organised table. With an IOT, every field must participate in the IOT key. A clustered index on SQL Server does not have to be unique, and does not have to be the primary key.
Clustered indexes are widely used on SQL Server, as there is almost always some natural ordering that makes a commonly used query more efficient. IOTs in Oracle carry more baggage, so they aren't quite as useful, although they may well be more useful than they're commonly given credit for.
Historically, really old versions of SQL Server (pre 6.5 or 7.0, IIRC) did not support row-level locking and could only lock at a table or page level. Often a clustered index would be used to ensure that writes were scattered around the table's physical storage to minimise contention on page locks. However, SQL Server 6 went out of support some years ago, so applications with this issue will be restricted to rare legacy systems.
Without a clustered index, your table is organized as a heap. This means that every row that is inserted is added to the data page at the end of the table. Also, as rows get updated, they get moved to the data page at the end of the table if the updated data is larger than before.
When it is good to not have a clustered index
If you have a table that needs the fastest possible inserts, but can sacrifice update and read speed, then not having a clustered index may work for you. One example would be a table being used as a queue: lots of inserts that later just get read and moved to a different table.
Clustered Indexes
Clustered indexes organize the data in your table based on the columns in the clustered index. If you cluster on the wrong thing - for instance a uniqueidentifier - this can slow things down (see below).
As long as your clustered index is on the value that is most commonly used for searching, and it is unique and increasing, then you get some amazing performance benefits out of the clustered index. For instance, if you have a table called USERS where you commonly look up user data based on USER_ID, then clustering on USER_ID would speed up all of those lookups, simply by reducing the number of data pages that need to be read to get at your data.
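A minimal sketch of that USERS example (the column definitions are assumptions):

-- Clustered PK on USER_ID: lookups by USER_ID touch the fewest data pages.
CREATE TABLE dbo.USERS (
    USER_ID   INT IDENTITY(1,1) NOT NULL,
    USER_NAME NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_USERS PRIMARY KEY CLUSTERED (USER_ID)
);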
If you have too many keys in your clustered index this can slow things down also.
General rules for clustered indexes:
Don't cluster on any varchar columns.
Clustering on INT IDENTITY columns is usually best.
Cluster on what you commonly search on.
Clustering on UniqueIdentifiers
Uniqueidentifiers in an index are extremely inefficient because they have no natural sort order. Given the b-tree structure of the index, you end up with extremely fragmented indexes when using uniqueidentifiers, and they quickly become heavily fragmented again even after rebuilding or reorganizing. So you end up with a slower index that is also really huge in memory and on disk due to the fragmentation. On inserts of a uniqueidentifier you are also more likely to end up with a page split on the index, slowing your insert. Generally, uniqueidentifiers are bad news for indexes.
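If you want to see this effect on your own tables, here is a hedged sketch using the standard fragmentation DMV (the table name is hypothetical):

-- Report average fragmentation for each index on the table.
SELECT i.name, s.index_type_desc, s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id;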
Summary
My recommendation is that every table should have a clustered index on it unless there is a really good reason not to (i.e. a table functioning as a queue).
I wouldn't know why you would prefer a heap over a clustered index most of the time. Using clustering, you get one index of your choice for free. Most of the time this is the primary key (which you probably want to enforce anyway!).
Heaps are mostly for special situations.
We use primary keys in relational databases, and in general relationships are established via these primary keys. Most people name the first field TableID and make it the primary key. When you join two or more tables in your query, you will get the fastest result if you use clustered indexes.
Summary: I have a table populated via the following:
insert into the_table (...) select ... from some_other_table
Running the above query with no primary key on the_table is ~15x faster than running it with a primary key, and I don't understand why.
The details: I think this is best explained through code examples.
I have a table:
create table the_table (
    a int not null,
    b smallint not null,
    c tinyint not null
);
If I add a primary key, this insert query is terribly slow:
alter table the_table
add constraint PK_the_table primary key(a, b);
-- Inserting ~880,000 rows
insert into the_table (a,b,c)
select a,b,c from some_view;
Without the primary key, the same insert query is about 15x faster. However, after populating the_table without a primary key, I can add the primary key constraint and that only takes a few seconds. This one really makes no sense to me.
More info:
The estimated execution plan shows 0% total query time spent on the clustered index insert
SQL Server 2008 R2 Developer edition, 10.50.1600
Any ideas?
Actually, it's not as clear-cut as Ryk suggests.
It can actually be faster to add data to a table with an index than to a heap.
Read this article - as far as I am aware it's quite well regarded:
http://www.sqlskills.com/blogs/kimberly/post/The-Clustered-Index-Debate-Continues.aspx
Bear in mind it's written by a SQL Server MVP and a Microsoft Regional Director.
Inserts are faster in a clustered table (but only in the "right" clustered table) than compared to a heap. The primary problem here is that lookups in the IAM/PFS to determine the insert location in a heap are slower than in a clustered table (where insert location is known, defined by the clustered key). Inserts are faster when inserted into a table where order is defined (CL) and where that order is ever-increasing. I have some simple numbers but I'm thinking about creating a much larger/complex scenario and publishing those. Simple/quick tests on a laptop are not always as "exciting".
I think if you create a simple primary key that is clustered and made up of a single auto-incrementing column, then inserts into such a table might be faster. Most likely, a primary key made up of multiple columns is the cause of the slowdown in inserts. When you use a composite key as the primary key, inserted rows may not get added at the end of the table but may need to be placed somewhere in the middle of the existing physical order of the rows, which adds to the insert time and hence makes the INSERTs slower. So use a single auto-incrementing column as the primary key value in your case to speed up inserts.
This is a good question, but a pretty crappy question too. Before you ask why an index slows down inserts, do you know what an index is?
If not, I suggest you read up on it. A clustered index is a B-tree (balanced tree), so every insert has to .... wait for it.... balance the tree. Hence clustered inserts are slower than inserts into heaps. If you don't know what a heap is, then I suggest you stop using SQL Server until you understand the basics. Otherwise you are attempting to use a product with no idea of what you are doing - basically driving a truck on the highway, blindfolded, thinking you are riding a bike. Unexpected results...
So when you create a clustered index after a table is populated, your 'heap' has some statistics to use, and SQL can basically optimise a few things. The process is much more complicated than that, but in some cases you will find that creating a clustered index after the fact can be a lot slower than simply inserting into an already-indexed table. It all has to do with key types, number of columns, types of columns, etc. This is unfortunately not a topic that fits in one answer; it is more a whole course and a few books by itself. Looking at your table above, it is a VERY simple table with ~7-byte rows. In this instance creating the index after the insert will be faster, but chuck in a few varchar(250)s etc., and the ballgame changes.
If you didn't know: a clustered index (if your table has one) IS your table.
Hope this helps.
I'm not a DBA ("Good!", you'll be thinking in a moment.)
I have a table of logging data with these characteristics and usage patterns:
A datetime column for storing log timestamps whose value is ever-increasing and mostly (but only mostly) unique
Frequent-ish inserts (say, a dozen a minute), only at the end of the timestamp range (new data being logged)
Infrequent deletes, in bulk, from the beginning of the timestamp range (old data being cleared)
No updates at all
Frequent-ish selects using the timestamp column as the primary criterion, along with secondary criteria on other columns
Infrequent selects using other columns as the criteria (and not including the timestamp column)
A good amount of data, but nowhere near enough that I'm worried much about storage space
Additionally, there is currently a daily maintenance window during which I could do table optimization.
I frankly don't expect this table to challenge the server it's going to be on even if I mis-index it a bit, but nevertheless it seemed like a good opportunity to ask for some input on SQL Server clustered indexes.
I know that clustered indexes determine the storage of the actual table data (the data is stored in the leaf nodes of the index itself), and that non-clustered indexes are separate pointers into the data. So in query terms, a clustered index is going to be faster than a non-clustered index -- once we've found the index value, the data is right there. There are costs on insert and delete (and of course an update changing the clustered index column's value would be particularly costly).
But I read in this answer that deletes leave gaps that don't get cleaned up until/unless the index is rebuilt.
All of this suggests to me that I should:
Put a clustered index on the timestamp column with a 100% fill-factor
Put non-clustered indexes on any other column that may be used as a criterion in a query that doesn't also involve the clustered column (which may be any of them in my case)
Schedule the bulk deletes to occur during the daily maintenance interval
Schedule a rebuild of the clustered index to occur immediately after the bulk delete
Relax and get out more
Am I wildly off base there? Do I need to frequently rebuild the index like that to avoid lots of wasted space? Are there other obvious (to a DBA) things I should be doing?
Thanks in advance.
Contrary to what a lot of people believe, having a good clustered index on a table can actually make operations like INSERTs faster - yes, faster!
Check out the seminal blog post The Clustered Index Debate Continues.... by Kimberly Tripp - the ultimate indexing queen.
She mentions (about in the middle of the article):
Inserts are faster in a clustered table (but only in the "right" clustered table) than compared to a heap. The primary problem here is that lookups in the IAM/PFS to determine the insert location in a heap are slower than in a clustered table (where insert location is known, defined by the clustered key). Inserts are faster when inserted into a table where order is defined (CL) and where that order is ever-increasing.
The crucial point is: only with the right clustered index will you be able to reap the benefits - when a clustered index is unique, narrow, stable and optimally ever-increasing. This is best served with an INT IDENTITY column.
Kimberly Tripp also has a great article on how to pick the best possible clustering key for your tables, and what criteria it should fulfil - see her post entitled Ever-increasing clustering key - the Clustered Index Debate..........again!
If you have such a column - e.g. a surrogate primary key - use that for your clustering key and you should see very nice performance on your table - even on lots of INSERTs.
I agree with putting the clustered index on the timestamp column. My query would be about the fillfactor: 100% gives the best read performance at the expense of write performance, and you may be hurt by page splits. Choosing a lower fillfactor will delay page splitting at the expense of read performance, so it's a fine balancing act to get the best for your situation.
After the bulk deletes it's worth rebuilding the indexes and updating statistics. This not only keeps performance up but also resets the indexes to the specified fillfactor.
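As a sketch of that maintenance step (the table name and the 90% fillfactor are hypothetical choices):

-- Rebuild all indexes on the log table at the chosen fillfactor, then refresh statistics.
ALTER INDEX ALL ON dbo.EventLog
    REBUILD WITH (FILLFACTOR = 90);

UPDATE STATISTICS dbo.EventLog;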
Finally, yes, put nonclustered indexes on other appropriate columns, but only ones that are very selective, e.g. not bit fields. And remember: the more indexes, the more write performance is affected.
There are two "best practice" ways to index a high-traffic logging table:
an integer identity column as a primary clustered key
a uniqueidentifier column as primary key, with DEFAULT NEWSEQUENTIALID()
Both methods allow SQL Server to grow the table efficiently, because it knows the index tree will grow in one particular direction.
I would not put any other indexes on the table, or schedule rebuilds of the index, unless there is a specific performance issue.
The obvious answer is: it depends on how you will query it. The point of the index is to lessen the number of comparisons when selecting data. The clustered index helps when you consider what data you will load together and the blocking factor of the storage (you can load a bunch of data in a 64k block with one read). If you include an ID and a datetime as the primary key but do not use them in your selection criteria, they will do nothing but hinder your performance. This is why people usually drop indexes before bulk-loading data.