DB advice needed for performance of a 'SessionVisit' table - sql-server

I have a 'SessionVisit' table which collects data about user visits.
The CREATE statement for this table is below; there may be 25,000 rows added a day. My database knowledge is definitely not up to scratch as far as understanding the implications of such a schema.
Can anyone give me their 2c of advice on some of these issues:
Do I need to worry about row size for this schema in SQL Server 2008? I'm not even sure how the 8 KB row-size limit works in 2008, or whether I'm wasting a lot of space when a row doesn't use all 8 KB.
How should I purge old records I don't want? Will new rows fill in the empty space left by deleted rows?
Any advice on indexes?
I know this is quite general in nature. Any obvious or non-obvious info would be appreciated.
Here's the table:
USE [MyDatabase]
GO
/****** Object: Table [dbo].[SessionVisit] Script Date: 06/06/2009 16:55:05 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[SessionVisit](
[SessionGUID] [uniqueidentifier] NOT NULL,
[SessionVisitId] [int] IDENTITY(1,1) NOT NULL,
[timestamp] [timestamp] NOT NULL,
[SessionDate] [datetime] NOT NULL CONSTRAINT [DF_SessionVisit_SessionDate] DEFAULT (getdate()),
[UserGUID] [uniqueidentifier] NOT NULL,
[CumulativeVisitCount] [int] NOT NULL CONSTRAINT [DF_SessionVisit_CumulativeVisitCount] DEFAULT ((0)),
[SiteUserId] [int] NULL,
[FullEntryURL] [varchar](255) NULL,
[SiteCanonicalURL] [varchar](100) NULL,
[StoreCanonicalURL] [varchar](100) NULL,
[CampaignId] [int] NULL,
[CampaignKey] [varchar](50) NULL,
[AdKeyword] [varchar](50) NULL,
[PartnerABVersion] [varchar](10) NULL,
[ABVersion] [varchar](10) NULL,
[UserAgent] [varchar](255) NULL,
[Referer] [varchar](255) NULL,
[KnownRefererId] [int] NULL,
[HostAddress] [varchar](20) NULL,
[HostName] [varchar](100) NULL,
[Language] [varchar](50) NULL,
[SessionLog] [xml] NULL,
[OrderDate] [datetime] NULL,
[OrderId] [varchar](50) NULL,
[utmcc] [varchar](1024) NULL,
[TestSession] [bit] NOT NULL CONSTRAINT [DF_SessionVisit_TestSession] DEFAULT ((0)),
[Bot] [bit] NULL,
CONSTRAINT [PK_SessionVisit] PRIMARY KEY CLUSTERED
(
[SessionGUID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[SessionVisit] WITH CHECK ADD CONSTRAINT [FK_SessionVisit_KnownReferer] FOREIGN KEY([KnownRefererId])
REFERENCES [dbo].[KnownReferer] ([KnownRefererId])
GO
ALTER TABLE [dbo].[SessionVisit] CHECK CONSTRAINT [FK_SessionVisit_KnownReferer]
GO
ALTER TABLE [dbo].[SessionVisit] WITH CHECK ADD CONSTRAINT [FK_SessionVisit_SiteUser] FOREIGN KEY([SiteUserId])
REFERENCES [dbo].[SiteUser] ([SiteUserId])
GO
ALTER TABLE [dbo].[SessionVisit] CHECK CONSTRAINT [FK_SessionVisit_SiteUser]

I see SessionGUID and SessionVisitId, why have both a uniqueidentifier and an Identity(1,1) on the same table? Seems redundant to me.
I see referer and knownrefererid, think about getting the referer from the knownrefererid if possible. This will help reduce excess writes.
I see campaignkey and campaignid; again, get the key from the campaigns table if possible.
I see orderid and orderdate. I'm sure you can get the order date from the orders table, correct?
I see hostaddress and hostname, do you really need the name? Usually the hostname doesn't serve much purpose and can be easily misleading.
I see multiple dates and timestamps, is any of this duplicate?
How about that SessionLog column? I see that it's XML. Is it a lot of data, is it data you may already have in other columns? If so get rid of the XML or the duplicated columns. Using SQL 2008 you can parse data out of that XML column when reporting and possibly eliminate a few extra columns (thus writes). Are you going to be in trouble in the future when developers add more to that XML? XML to me just screams 'a lot of excessive writing'.
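For example, page-level details could stay in SessionLog and be pulled out at report time with the 2008 XML methods, rather than being duplicated in extra columns. A sketch only; the element and attribute names below are made up for illustration:
-- Hypothetical SessionLog structure: <log><page url="..." /></log>
SELECT SessionGUID,
       SessionLog.value('(/log/page/@url)[1]', 'varchar(255)') AS FirstPageUrl
FROM dbo.SessionVisit
WHERE SessionLog IS NOT NULL;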
Mitch says to remove the primary key. Personally I would leave the index on the table. Since it is clustered that will help speed up write times as the DB will always write new rows at the end of the table on the disk.
Strip out some of this duplicate information and you'll probably do just fine writing a row each visit.

Well, I'd recommend NOT inserting a few k of data with EVERY page!
First thing I'd do would be to see how much of this information I could get from a 3rd party analytics tool, perhaps combined with log analysis. That should allow you to drop a lot of the fields.
25k inserts a day isn't much, but the catch here is that the busier your site gets, the more load this is going to put on the db. Perhaps you could build a queuing system that batches the writes, but really, most of this information is already in the logs.

Agree with Chris that you would probably be better off using log analysis (check out Microsoft's free Log Parser)
Failing that, I would remove the Foreign Key constraints from your SessionVisit table.
You mentioned row size; the varchars in your table do not pre-allocate to their maximum length (more like 4 + 4 bytes for an empty field, approximately). That said, a general rule is to keep rows as 'lean' as possible.
Also, I would remove the primary key from the SessionGUID (GUID) column. It won't help you much.
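One way to act on that while still keeping a clustered index for sequential inserts (a sketch only, assuming you still need to look sessions up by GUID): cluster on the identity column instead, and keep the GUID as a nonclustered unique key.
-- Sketch: re-key so new rows go to the end of the clustered index
-- (dropping/re-adding the clustered key rebuilds the table; do it in a maintenance window)
ALTER TABLE dbo.SessionVisit DROP CONSTRAINT PK_SessionVisit;
ALTER TABLE dbo.SessionVisit
    ADD CONSTRAINT PK_SessionVisit PRIMARY KEY CLUSTERED (SessionVisitId);
ALTER TABLE dbo.SessionVisit
    ADD CONSTRAINT UQ_SessionVisit_SessionGUID UNIQUE NONCLUSTERED (SessionGUID);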

That's also an awful lot of nulls in that table. I think you should group together the columns that must be non-null at the same time. In fact, you should do a better analysis of the data you're writing, rather than lumping it all together in a single table.

Related

MS PowerApps: How to "Patch" a SQL Table with Composite Primary Key

I am relatively new to MS PowerApps
I have SQL Server Express installed on an on-site server with a Gateway for PowerApps.
My SQL Server table has a composite primary key; it is defined as:
CREATE TABLE [GFX_Information].[BusinessParnterAccess]
(
[BpAccesID] [int] IDENTITY(1,1) NOT NULL,
[CreatedDate] [datetime] NOT NULL,
[UpdatedDate] [datetime] NOT NULL,
[LastOperatorID] [int] NOT NULL,
[CreateByID] [int] NOT NULL,
[BPID] [int] NOT NULL,
[AllowedOperatorID] [int] NOT NULL,
[AccessFlag] [varchar](10) NULL,
PRIMARY KEY CLUSTERED ([AllowedOperatorID] ASC, [BPID] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [GFX_Information].[BusinessParnterAccess]
ADD DEFAULT (GETDATE()) FOR [CreatedDate]
GO
ALTER TABLE [GFX_Information].[BusinessParnterAccess]
ADD DEFAULT (GETDATE()) FOR [UpdatedDate]
GO
I am trying to work out how to "Patch" a new record.
Currently, using the OnVisible event I create a variable to hold the last BpAccesID like this
UpdateContext ({varLastAccessID:First(SortByColumns('[GFX_Information].[BusinessParnterAccess]',"BpAccesID",Descending)).BpAccesID});
I am using a manual set of values for the Patch Command for testing purposes. The Patch command is
Patch('[GFX_Information].[BusinessParnterAccess]',Defaults('[GFX_Information].[BusinessParnterAccess]')
,{BpAccesID:varLastAccessID+1
,CreatedDate: Now()
,UpdatedDate:Now()
,LastOperatorID:4
,CreateByID:4
,BPID:342
,AllowedOperatorID:4
,AccessFlag:"RW" });
However, this does not throw an error I can detect, nor can I see what I am missing.
Can anyone provide any ideas, please?
I was reading this, and this suggestion is based on my knowledge of SQL Server and a quick read about Patch. It may help you, or it may not (I'm sorry). Also, just confirming: I'm guessing the question is "this doesn't create a new row and I cannot see why?"
I would guess that your issue is with BPAccessId. You've set it as an identity: [BpAccesID] [int] IDENTITY(1,1) NOT NULL,
However, you explicitly insert a value into it
Patch('[GFX_Information].[BusinessParnterAccess]',Defaults('[GFX_Information].[BusinessParnterAccess]')
,{BpAccesID:varLastAccessID+1
Of course, you usually cannot insert into an IDENTITY column in SQL Server - you need to set IDENTITY_INSERT on (then off again after you finish). Also, as an aside, one of the reasons for IDENTITY PK columns is to always create a new row with a valid PK. How does the approach above work for concurrency, e.g., two users trying to create a new row at the same time?
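On the SQL Server side that looks roughly like this (values copied from the test Patch above; whether the PowerApps SQL connector can issue such a statement is another matter):
SET IDENTITY_INSERT [GFX_Information].[BusinessParnterAccess] ON;
INSERT INTO [GFX_Information].[BusinessParnterAccess]
    (BpAccesID, LastOperatorID, CreateByID, BPID, AllowedOperatorID, AccessFlag)
VALUES (123, 4, 4, 342, 4, 'RW');   -- 123 stands in for varLastAccessID + 1
SET IDENTITY_INSERT [GFX_Information].[BusinessParnterAccess] OFF;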
Anyway, some potential solutions off the top of my head. Once again, this is based off my knowledge of SQL Server only.
Alter the MS PowerApps statement to work with the IDENTITY (I'll leave this up to you), whether via the equivalent of SET IDENTITY_INSERT table ON; or otherwise
Remove the IDENTITY property from BPAccessID (e.g., leave it as a pure int)
Make the Primary Key a composite of all three columns e.g., AllowedOperatorID, BPID, BPAccessID
Make BPAccessID the Primary Key but non-clustered, and make a unique clustered index for AllowedOperatorID, BPID
For the bottom two, as BPAccessID is still an IDENTITY, you'll need to let SQL Server handle calculating the new value.
If you are not using foreign keys to this table, then the bottom two will have similar effects.
However, if there are foreign keys, then the bottom one (a non-clustered PK and clustered unique index on the other two) is probably the closest to your current setup (and is actually what I would typically do in a table structure like yours, regardless of PowerApps or other processing).
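A sketch of that last option against the table from the question (only the key definitions change; the constraint and index names are mine):
CREATE TABLE [GFX_Information].[BusinessParnterAccess]
(
    [BpAccesID] [int] IDENTITY(1,1) NOT NULL,
    [CreatedDate] [datetime] NOT NULL DEFAULT (GETDATE()),
    [UpdatedDate] [datetime] NOT NULL DEFAULT (GETDATE()),
    [LastOperatorID] [int] NOT NULL,
    [CreateByID] [int] NOT NULL,
    [BPID] [int] NOT NULL,
    [AllowedOperatorID] [int] NOT NULL,
    [AccessFlag] [varchar](10) NULL,
    CONSTRAINT [PK_BusinessParnterAccess] PRIMARY KEY NONCLUSTERED ([BpAccesID] ASC)
);
GO
CREATE UNIQUE CLUSTERED INDEX [UX_BusinessParnterAccess_Operator_BP]
    ON [GFX_Information].[BusinessParnterAccess] ([AllowedOperatorID] ASC, [BPID] ASC);
GO
With that in place, the Patch call simply omits BpAccesID and lets the IDENTITY generate the value.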

Slow on Retrieving data from 38GB SQL Table

I am looking for some advice. I have a SQL Server table called AuditLog, and this table records any actions/changes that happen to our DB from our web application.
I am trying to build some reports, and any time I try to pull data from this table my queries go from seconds to 10 minutes plus. Just doing a
select * from dbo.auditlog
takes about 2 hours plus.
The table has 77 million rows and is growing. Anyhow, my only thought at this moment is to add an index, but that would slow down inserts. I'm not sure how much that would affect performance, so I have held back on it. Other thoughts were to partition the table or use an indexed view, but we are running SQL Server 2014 Standard Edition and those options are not supported.
Here is the table create statement:
CREATE TABLE [dbo].[AuditLog]
(
[AuditLogId] [uniqueidentifier] NOT NULL,
[UserId] [uniqueidentifier] NULL,
[EventDateUtc] [datetime] NOT NULL,
[EventType] [char](1) NOT NULL,
[TableName] [nvarchar](100) NOT NULL,
[RecordId] [nvarchar](100) NOT NULL,
[ColumnName] [nvarchar](100) NOT NULL,
[OriginalValue] [nvarchar](max) NULL,
[NewValue] [nvarchar](max) NULL,
[Rams1RecordID] [uniqueidentifier] NULL,
[Rams1AuditHistoryID] [uniqueidentifier] NULL,
[Rams1UserID] [uniqueidentifier] NULL,
[CreatedBy] [uniqueidentifier] NULL,
[CreatedDate] [datetime] NULL DEFAULT (getdate()),
[OriginalValueNiceName] [nvarchar](100) NULL,
[NewValueNiceName] [nvarchar](100) NULL,
CONSTRAINT [PK_AuditLog]
PRIMARY KEY CLUSTERED ([TableName] ASC, [RecordId] ASC, [AuditLogId] ASC)
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[AuditLog] WITH NOCHECK
ADD CONSTRAINT [FK_AuditLog_User]
FOREIGN KEY([UserId]) REFERENCES [dbo].[User] ([UserID])
GO
ALTER TABLE [dbo].[AuditLog] CHECK CONSTRAINT [FK_AuditLog_User]
GO
ALTER TABLE [dbo].[AuditLog] WITH NOCHECK
ADD CONSTRAINT [FK_AuditLog_UserCreatedBy]
FOREIGN KEY([CreatedBy]) REFERENCES [dbo].[User] ([UserID])
GO
ALTER TABLE [dbo].[AuditLog] CHECK CONSTRAINT [FK_AuditLog_UserCreatedBy]
GO
With something that big there are a couple of things you might try.
The first thing you need to do is define how you are accessing the table MOST of the time and index accordingly.
I would hope you are not doing a select * from AuditLog without any filtering for a reporting solution - it shouldn't even be an option.
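For example, if most of your reports filter on a date range (an assumption; index whatever your reports actually filter on), a narrow nonclustered index lets those reports avoid scanning the whole clustered index:
-- Assumes reports mostly slice by date; adjust the key and INCLUDE columns to your real queries
CREATE NONCLUSTERED INDEX IX_AuditLog_EventDateUtc
    ON dbo.AuditLog (EventDateUtc)
    INCLUDE (TableName, RecordId, EventType, UserId);
GO
-- A filtered report query this index can satisfy
SELECT EventDateUtc, TableName, RecordId, EventType, UserId
FROM dbo.AuditLog
WHERE EventDateUtc >= '20190101' AND EventDateUtc < '20190201';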
Finally, instead of indexed views or partitioning, you might consider a partitioned view.
A partitioned view is basically breaking your table up, physically, into smaller meaningful tables - based on date or type or object or however you MOST often access it. Each table is then indexed separately, giving you much better stats, and if you are on 2012 or higher you can take advantage of ColumnStore, assuming you use something like a DATE to group the data.
Create a view that spans all of the tables and then report based on the view. Since you have already grouped your data by how you will MOST often access it, your filter will act similarly to partition elimination and get you to your data faster.
Of course this will result in a little more maintenance and some code change, but it will be well worth the effort if you are storing that much data and more in a single table.
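A minimal sketch of the idea, assuming monthly tables (the table names, date ranges, and trimmed-down column list are made up for illustration):
-- Each table holds one month; the CHECK constraint on the date is what lets the
-- optimizer skip tables that cannot match the filter
CREATE TABLE dbo.AuditLog_201901
(
    AuditLogId   uniqueidentifier NOT NULL,
    EventDateUtc datetime NOT NULL
        CONSTRAINT CK_AuditLog_201901_Date
            CHECK (EventDateUtc >= '20190101' AND EventDateUtc < '20190201'),
    TableName    nvarchar(100) NOT NULL,
    RecordId     nvarchar(100) NOT NULL,
    CONSTRAINT PK_AuditLog_201901 PRIMARY KEY CLUSTERED (EventDateUtc, AuditLogId)
);
CREATE TABLE dbo.AuditLog_201902
(
    AuditLogId   uniqueidentifier NOT NULL,
    EventDateUtc datetime NOT NULL
        CONSTRAINT CK_AuditLog_201902_Date
            CHECK (EventDateUtc >= '20190201' AND EventDateUtc < '20190301'),
    TableName    nvarchar(100) NOT NULL,
    RecordId     nvarchar(100) NOT NULL,
    CONSTRAINT PK_AuditLog_201902 PRIMARY KEY CLUSTERED (EventDateUtc, AuditLogId)
);
GO
CREATE VIEW dbo.AuditLogAll
AS
SELECT AuditLogId, EventDateUtc, TableName, RecordId FROM dbo.AuditLog_201901
UNION ALL
SELECT AuditLogId, EventDateUtc, TableName, RecordId FROM dbo.AuditLog_201902;
GO
-- A date filter on the view only touches the January table
SELECT TableName, RecordId, EventDateUtc
FROM dbo.AuditLogAll
WHERE EventDateUtc >= '20190110' AND EventDateUtc < '20190120';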

Why SQL Server occasionally decides not to use this index?

I have a complex problem with SQL Server.
I administer 40 databases with identical structure but different data. Those database sizes vary from 2 MB to 10 GB of data. The main table for these databases is:
CREATE TABLE [dbo].[Eventos](
[ID_Evento] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
[FechaGPS] [datetime] NOT NULL,
[FechaRecepcion] [datetime] NOT NULL,
[CodigoUnico] [varchar](30) COLLATE Modern_Spanish_CI_AS NULL,
[ID_Movil] [int] NULL,
[CodigoEvento] [char](5) COLLATE Modern_Spanish_CI_AS NULL,
[EventoData] [varchar](150) COLLATE Modern_Spanish_CI_AS NULL,
[EventoAlarma] [bit] NOT NULL CONSTRAINT [DF_Table_1_Alarma] DEFAULT ((0)),
[Ack] [bit] NOT NULL CONSTRAINT [DF_Eventos_Ack] DEFAULT ((0)),
[Procesado] [bit] NOT NULL CONSTRAINT [DF_Eventos_Procesado] DEFAULT ((0)),
[Latitud] [float] NULL,
[Longitud] [float] NULL,
[Velocidad] [float] NULL,
[Rumbo] [smallint] NULL,
[Satelites] [tinyint] NULL,
[EventoCerca] [bit] NOT NULL CONSTRAINT [DF_Eventos_FueraCerca] DEFAULT ((0)),
[ID_CercaElectronica] [int] NULL,
[Direccion] [varchar](250) COLLATE Modern_Spanish_CI_AS NULL,
[Localidad] [varchar](150) COLLATE Modern_Spanish_CI_AS NULL,
[Provincia] [varchar](100) COLLATE Modern_Spanish_CI_AS NULL,
[Pais] [varchar](50) COLLATE Modern_Spanish_CI_AS NULL,
[EstadoEntradas] [char](16) COLLATE Modern_Spanish_CI_AS NULL,
[DentroFuera] [char](1) COLLATE Modern_Spanish_CI_AS NULL,
[Enviado] [bit] NOT NULL CONSTRAINT [DF_Eventos_Enviado] DEFAULT ((0)),
[SeñalGSM] [int] NOT NULL DEFAULT ((0)),
[GeoCode] [bit] NOT NULL CONSTRAINT [DF_Eventos_GeoCode] DEFAULT ((0)),
[Contacto] [bit] NOT NULL CONSTRAINT [DF_Eventos_Contacto] DEFAULT ((0)),
CONSTRAINT [PK_Eventos] PRIMARY KEY CLUSTERED
(
[ID_Evento] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
USE [ABS]
GO
ALTER TABLE [dbo].[Eventos] WITH CHECK ADD CONSTRAINT [FK_Eventos_Eventos] FOREIGN KEY([ID_Evento])
REFERENCES [dbo].[Eventos] ([ID_Evento])
I also have a loop that runs every n seconds to process these records (only the new ones, marking them as processed). This process uses this query:
SELECT
Tbl.ID_Cliente, Ev.ID_Evento, Tbl.ID_Movil, Ev.EventoData, Tbl.Evento,
Tbl.ID_CercaElectronica, Ev.Latitud, Ev.Longitud, Tbl.EsAlarma, Ev.FechaGPS,
Tbl.AlarmaVelocidad, Ev.Velocidad, Ev.CodigoEvento
FROM
dbo.Eventos AS Ev
INNER JOIN
(SELECT
Det.CodigoEvento, Mov.CodigoUnico, Mov.ID_Cliente, Mov.ID_Movil, Det.Evento,
Mov.ID_CercaElectronica, Det.EsAlarma, Mov.AlarmaVelocidad
FROM
dbo.Moviles Mov
INNER JOIN
dbo.GruposEventos AS GE
INNER JOIN
dbo.GruposEventosDet AS Det ON Det.ID_GrupoEventos = GE.ID_GrupoEventos
ON GE.ID_GrupoEventos = Mov.ID_GrupoEventos) as Tbl ON EV.CodigoUnico = Tbl.CodigoUnico AND Ev.CodigoEvento = Tbl.CodigoEvento
WHERE
(Ev.Procesado = 0)
On some databases the table can have more than 1,000,000 records. So to optimize the process I created this index specifically for this query, using a SQL optimization assistant:
CREATE NONCLUSTERED INDEX [OptimizadorProcesarEventos] ON [dbo].[Eventos]
(
[Procesado] ASC,
[CodigoEvento] ASC,
[CodigoUnico] ASC,
[FechaGPS] ASC
)
INCLUDE ( [ID_Evento],
[EventoData],
[Latitud],
[Longitud],
[Velocidad]) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]
This used to work perfectly. But now, occasionally and only in some databases, the query takes forever and gives me a timeout. So I ran "show execution plan" and realized that in some scenarios, depending on the data in the table, SQL Server decides not to use my index and uses the PK index instead. I verified this by running the same execution plan on another database that works fine, and there the index is being used.
So my question: why does SQL Server on some occasions decide not to use my index?
Thank you for your interest!
UPDATE
I already tried UPDATE STATISTICS and it didn't help. I prefer to avoid the use of a HINT for now, so the question remains: why does SQL Server choose a less efficient way to execute my query if it has an index for it?
UPDATE II
After many tests, I finally resolved the problem, even though I don't quite understand why this worked. I changed the index to this:
CREATE NONCLUSTERED INDEX [OptimizadorProcesarEventos] ON [dbo].[Eventos]
(
[CodigoUnico] ASC,
[CodigoEvento] ASC,
[Procesado] ASC,
[FechaGPS] ASC
)
INCLUDE ( [ID_Evento],
[EventoData],
[Latitud],
[Longitud],
[Velocidad]) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]
Basically, I changed the order of the fields in the index and the query immediately started to use the index as expected. It's still a mystery to me how SQL Server chooses whether or not to use an index for a specific query. Thanks to everyone.
You must have found a lot of articles on how the query optimizer chooses the right index; if not, search for some on Google.
I can point out one to start with:
Index Selection and the Query Optimizer
The simple answer is as follows:
"Based on the index usage history, statistics, number of rows inserted/updated/deleted etc.... Query optimizer has find out that using the PK index is less costly than using the other Non Clustered index."
Now you will have a lot of questions about how the query optimizer finds that out, and that will require some homework.
Though in your specific situation, I do not agree with "Femi", who mentioned trying to run "UPDATE STATISTICS", because there are other situations where UPDATE STATISTICS will not help either.
It sounds like you have tested this index on this query, and if you are sure that you want only this index to be used 100% of the time by that query, use a query hint and specify that this index needs to be used. That way you can always be sure that this index will be used.
CAUTION: you must have done more than enough testing on various data loads to make sure there is no case where using this index is unexpected or unacceptable. Once you use the query hint, every execution will use that index only, and the optimizer will always come up with an execution plan using it.
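If you do decide to pin it, the change is just a table hint on the Eventos reference; a trimmed-down sketch (the joins and derived table stay exactly as in the question's query):
SELECT Ev.ID_Evento, Ev.EventoData, Ev.Latitud, Ev.Longitud,
       Ev.Velocidad, Ev.FechaGPS, Ev.CodigoEvento
FROM dbo.Eventos AS Ev WITH (INDEX (OptimizadorProcesarEventos))
WHERE Ev.Procesado = 0;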
It's difficult to tell in this specific case, but very often the query planner will look at the statistics it has for the specific table and decide to use the wrong index (for some definition of wrong; probably just not the index you think it should use). Try running UPDATE STATISTICS on the table and see if the query planner arrives at a different set of decisions.
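Concretely, something like this (FULLSCAN is optional but gives the optimizer the most accurate picture):
UPDATE STATISTICS dbo.Eventos WITH FULLSCAN;                            -- all statistics on the table
UPDATE STATISTICS dbo.Eventos OptimizadorProcesarEventos WITH FULLSCAN; -- or just the index in question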
Determining why the optimizer does or doesn't choose a given index can be somewhat of a dark art. I do notice, however, that there's likely a better index that you could be using. Specifically:
CREATE NONCLUSTERED INDEX [OptimizadorProcesarEventos] ON [dbo].[Eventos]
(
[Procesado] ASC,
[CodigoEvento] ASC,
[CodigoUnico] ASC,
[FechaGPS] ASC
)
INCLUDE ( [ID_Evento],
[EventoData],
[Latitud],
[Longitud],
[Velocidad])
WHERE Procesado = 0 -- this makes it a filtered index
WITH (SORT_IN_TEMPDB = OFF,
DROP_EXISTING = OFF,
IGNORE_DUP_KEY = OFF,
ONLINE = OFF)
ON [PRIMARY]
This is based on my assumption that at any given time most of the rows in your table are processed (i.e. Procesado = 1), so the above index would be much smaller than the non-filtered version.

Dropping Azure Schema or Deleting Rows takes a very long time

Vague title I know.
I have, at the moment, 16,000 rows in my database. This was created just while in development; I want to now delete all these rows so I can start again (so I don't have duplicate data).
The database is on SQL Azure.
If I run a select query
SELECT [Guid]
,[IssueNumber]
,[Severity]
,[PainIndex]
,[Status]
,[Month]
,[Year]
,[DateCreated]
,[Region]
,[IncidentStart]
,[IncidentEnd]
,[SRCount]
,[AggravatingFactors]
,[AggravatingFactorDescription]
FROM [dbo].[WeeklyGSFEntity]
GO
This returns all the rows, and SSMS says this takes 49 seconds.
If I attempt to drop the table, this goes on for 5 minutes plus.
DROP TABLE [dbo].[WeeklyGSFEntity]
GO
/****** Object: Table [dbo].[WeeklyGSFEntity] Script Date: 10/01/2013 09:46:18 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[WeeklyGSFEntity](
[Guid] [uniqueidentifier] NOT NULL,
[IssueNumber] [int] NULL,
[Severity] [int] NULL,
[PainIndex] [nchar](1) NULL,
[Status] [nvarchar](255) NULL,
[Month] [int] NULL,
[Year] [int] NULL,
[DateCreated] [datetime] NULL,
[Region] [nvarchar](255) NULL,
[IncidentStart] [datetime] NULL,
[IncidentEnd] [datetime] NULL,
[SRCount] [int] NULL,
[AggravatingFactors] [nvarchar](255) NULL,
[AggravatingFactorDescription] [nvarchar](max) NULL,
PRIMARY KEY CLUSTERED
(
[Guid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
GO
If I attempt to delete each row, this also takes 5 minutes plus.
DELETE
FROM [dbo].[WeeklyGSFEntity]
GO
Am I doing something wrong or is it just that this is big data and I'm being impatient?
UPDATE:
Dropping the entire database took some 25 seconds.
Importing 22,000 rows (roughly the same 16,000 plus more) into localdb\v11.0 took 6 seconds. I know this is local but surely the local dev server is slower than Azure? Surely...
UPDATE the second:
Recreating the database and recreating the schema (with (Fluent) NHibernate), and then inserting some 20,000 rows took 2 minutes 6 seconds. All Unit Tests pass.
Is there anything I can do to look back and work out why?
Dropping and recreating the database sped things up considerably.
The reason for this is unknown.
Possibly there is an open transaction causing a lock on the table. This could be caused by cancelling an operation halfway through, like we all do during dev.
Run sp_who2 and see which SPID is in the BlkBy column. If there is one, that's it.
To kill that process, run KILL with that SPID.
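From another query window, that looks like:
EXEC sp_who2;   -- check the BlkBy column for the blocking session id
KILL 53;        -- 53 is a placeholder; use the SPID you saw in BlkBy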

Identity column without index or unique constraint

I just experienced a database breakdown due to sudden extraordinary data loading from disk.
I found the issue would arise when I attempted inserting into a log table with approx. 3.5 million rows. The table features an ID column set to IDENTITY, but with no indexes or unique constraints.
CREATE TABLE [dbo].[IntegrationTestLog](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Ident] [varchar](50) NULL,
[Date] [datetime] NOT NULL,
[Thread] [varchar](255) NOT NULL,
[Level] [varchar](50) NOT NULL,
[Logger] [varchar](255) NOT NULL,
[Message] [varchar](max) NOT NULL,
[Exception] [varchar](max) NULL
)
The issue is triggered by this line:
INSERT INTO IntegrationTestLog ([Ident],[Date],[Thread],[Level],[Logger],[Message],[Exception]) VALUES (@Ident, @log_date, @thread, @log_level, @logger, @message, @exception)
There are possibly many other queries that will trigger it, but this one I know for sure.
Bear with me, because I'm only guessing now, but does the identity seeding process somehow slow down if an index is missing? Could it by any slight chance fall back to doing a MAX(ID) query to get the latest entry? (Probably not.) I haven't succeeded in finding any deep technical information about the subject yet. Please share if you know of literature or links on it.
To solve the issue, we ended up truncating the table, which itself took VERY long. I also promoted ID to be primary key.
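For reference, that amounted to roughly this (the constraint name is mine):
TRUNCATE TABLE dbo.IntegrationTestLog;   -- as noted below, this also resets the identity seed
ALTER TABLE dbo.IntegrationTestLog
    ADD CONSTRAINT PK_IntegrationTestLog PRIMARY KEY CLUSTERED (Id);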
Then I read this article: Identity columns and found that truncate actually does touch the identity seed.
A truncate table (but not delete) will update the current seed to the
original seed value.
...which again only led me to be more suspicious of the identity seed.
Again I'm searching in the dark - please enlighten me on this issue if you have the insight.
