Came across this error today. Wondering if anyone can tell me what it means:
Cannot sort a row of size 9522, which is greater than the allowable maximum of 8094.
Is that 8094 bytes? Characters? Fields? Is this a problem joining multiple tables that are exceeding some limit?
In SQL 2000, the row limit is 8K bytes, which is the same size as a page in memory.
[Edit]
In 2005, the page size is the same (8K), but the database uses pointers on the row in the page to point to other pages that contain larger fields. This allows 2005 to overcome the 8K row size limitation.
The problem that seems to catch a lot of people is that you can create a table whose definition would allow more than 8K of data, and SQL Server will accept it just fine. The table will also work fine, up until the point you actually try to insert more than 8K of data into a single row.
So, let's say you create a table with an integer field for the primary key and 10 varchar(1000) fields. The table would work fine most of the time, as you would rarely fill up all 10 of the varchar(1000) fields. However, in the event that you tried to put 1000 characters in each of those fields, you would get the error mentioned in this question.
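A minimal sketch of that scenario (table and column names are made up; on SQL 2000 the last insert fails, while 2005 and later push the overflow off-row as described above):

-- The definition allows rows of up to ~10,000 bytes, yet CREATE succeeds.
CREATE TABLE dbo.WideDemo
(
    Id    int IDENTITY(1,1) PRIMARY KEY,
    Col1  varchar(1000), Col2  varchar(1000), Col3  varchar(1000),
    Col4  varchar(1000), Col5  varchar(1000), Col6  varchar(1000),
    Col7  varchar(1000), Col8  varchar(1000), Col9  varchar(1000),
    Col10 varchar(1000)
);

-- Works: the actual row is small.
INSERT INTO dbo.WideDemo (Col1) VALUES ('short value');

-- Fails on SQL 2000 with the row-size error: the actual row would be ~10,000 bytes.
INSERT INTO dbo.WideDemo (Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10)
VALUES (REPLICATE('x', 1000), REPLICATE('x', 1000), REPLICATE('x', 1000),
        REPLICATE('x', 1000), REPLICATE('x', 1000), REPLICATE('x', 1000),
        REPLICATE('x', 1000), REPLICATE('x', 1000), REPLICATE('x', 1000),
        REPLICATE('x', 1000));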
FYI, running this SQL command on your DB can fix the problem if it is caused by space that needs to be reclaimed after dropping variable length columns:
DBCC CLEANTABLE (0,[dbo.TableName])
See: http://msdn.microsoft.com/en-us/library/ms174418.aspx
That used to be a problem in SQL 2000, but I thought that was fixed in 2005.
8094 bytes.
If you list some more information about what you are doing it might help us to figure out the actual cause.
Related
I have a table with 102 columns and 43200 rows. The Id column is an identity column, and 2 columns have a unique index.
When I just execute
Select *
from MyTable
it takes more than 8 minutes over the network.
This table has a Status column which contains 1 or 0. If I select with WHERE Status = 1, I get 31565 rows and the select takes 6+ minutes. For your information, status 1 means completed and will never change again, whereas status 0 means work in progress, and those rows have various column values still being changed by different users at different stages.
When I select with Status = 0, it takes 1.43 minutes and returns 11568 rows.
How can I increase performance for completed and WIP status query separately or cumulatively? Can I somehow use caching?
SQL Server takes care of caching, at least as long as there is enough free RAM. When it takes this long just to get the data, you first need to find the bottleneck.
RAM: Is there enough to hold the full table? And is SQL Server configured to use it?
Is there an upper limit on RAM usage? If not, SQL Server assumes unlimited RAM, which often ends up caching to the page file and causes massive slowdowns (see the sketch after this list).
You said it takes "8+ minutes" over the network. How long does it take when executed locally? Maybe the network is slow.
Hard drive: When the table is too big to be held in RAM, it gets read from the hard drive. HDDs are somewhat slow. Maybe defragmenting the indexes could help here (at least somewhat).
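For the RAM cap mentioned above, a minimal sketch (the 8192 MB figure is purely illustrative, not a recommendation for your server):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the buffer pool so SQL Server leaves memory for the OS instead of paging.
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;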
If none of that helps, SQL Profiler can show you where the bottleneck actually is.
This is an interesting question, but it's a little open-ended; more info is needed. I totally agree with allmhuran's comment that maybe you shouldn't be using "select * ..." on a large table. (It could in fact be posted as an answer; it deserves upvotes.)
I suspect there may be design issues - are you using BLOBs? Is the data at least partially normalized? See https://en.wikipedia.org/wiki/Database_normalization
I suggest creating a nonclustered index on the Status column. It will improve your queries whose WHERE clause uses this column.
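A minimal sketch, assuming the table from the question is called MyTable (the index name is made up):

CREATE NONCLUSTERED INDEX IX_MyTable_Status
ON dbo.MyTable (Status);

Note that with SELECT * the index alone may still force a lot of lookups; selecting only the columns you need, or (on SQL Server 2008 and later) a filtered index with WHERE Status = 1, would help further.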
We are getting this error on a table in our database:
Cannot create a row of size 8937 which is greater than the allowable
maximum of 8060.
The table consists of about 400 varchar(max) fields. We are, however, only inserting empty strings into these fields.
The insert seems to work, however when using SqlXml to read the data or when running DBCC DBREINDEX on the primary key of the table, the error occurs.
It is only occurring on one particular SQL Server (2005) and not on others (2005 Express). The problem machine is running 64-bit Windows and the others are running 32-bit Windows.
Has anyone got any ideas about this? Please let me know if I need to include any more information.
I'd like to point out that I completely agree that it is rather extreme, unusual and not at all sensible to be attempting to use this many varchar(max) columns. There are reasons for it, mainly not under my control, that I will not go into here.
The error occurs because you cannot have a row in SQL Server that is larger than 8KB (the size of 1 page), since rows are not allowed to span pages - it's a basic limit of SQL Server. You can read more about it here:
Database Basics Quick Note - The difference in Varchar and Nvarchar data types
Note that SQL Server will allow you to create the table; however, if you try to actually insert any data that would span multiple pages, it will give the above error.
Of course this doesn't quite add up, because if the above were the whole truth then a single VARCHAR(8000) column would fill a row in a table! (This used to be the case.) SQL Server 2005 got around this limitation by allowing certain data in a row to be stored on another page, leaving a 24-byte pointer in its place. You can read about this here:
How Sql Server 2005 bypasses the 8KB row size limitation
Maximum Row Size in SQL Server 2005 to the Limit
As you can see, this means that rows can now span multiple pages; however, a single column still needs to fit on a single page (hence the maximum size of VARCHAR(8000) for a non-max column), and there is still a limit on the total number of such columns you can have (around 8000 / 24 = ~333 by my estimate).
Of course this is all missing the main point, which is that 400 wide columns on a single table is absurd!!!
You should take a long hard look at your database schema and come up with something more reasonable - you could start with choosing some more conservative estimates on column sizes (like VARCHAR(255) or VARCHAR(50)), but you really need to split some of those fields out into separate tables.
You might have a deleted column in the table that still takes up space. Also check the "text in row" settings are the same.
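To compare that setting between the two servers, something along these lines should work (the table name is a placeholder):

-- Text-in-row limit in bytes for text/ntext/image columns (0 means the option is off):
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.TableName'), 'TableTextInRowLimit');

-- For varchar(max) columns the related setting is 'large value types out of row':
EXEC sp_tableoption 'dbo.TableName', 'large value types out of row', 1;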
The row size is determined by the types of the columns, not how much data you store in them.
Having 400 varchar fields in a single table tells me you are doing something wrong. Perhaps you need to normalize the schema?
I ran into this problem today, even though the table's columns are small, it has only a few hundred rows, and it worked fine before. I spent a lot of time researching it.
Finally, I backed up the table's data, dropped the table, created an identical new table, and restored the data. Everything is OK now.
It suggests the old table may have been corrupted internally - maybe a SQL Server bug. Anyway, the issue is gone; I hope this saves others some time.
Background: MS SQL 2012
Can someone explain some behaviour I'm seeing in SQL Server 2005?
I've been tasked with reducing the size of our DB.
The table contains nearly 6 million records, and I calculated the row size as being 1990 bytes. I took a copy of the table, and reduced the row size down to 803 bytes, through various techniques.
When I compare the original table's Data Size (right-click properties or sp_spaceused) with the new table, I'm seeing a saving of just 21.7 MB. This is nowhere near what I was expecting.
Here is how I calculated the row-size:
If the column was numeric/decimal then I used the MSDN size (http://msdn.microsoft.com/en-us/library/ms187746.aspx), for everything else I used syscolumns.length. If the column was nullable I added an extra byte.
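That calculation can be approximated with a query like this (a rough sketch against the legacy syscolumns view the poster mentions; the table name is a placeholder, and it ignores the row header, null bitmap and variable-length overhead):

SELECT SUM(c.length + CASE WHEN c.isnullable = 1 THEN 1 ELSE 0 END) AS approx_row_bytes
FROM syscolumns c
WHERE c.id = OBJECT_ID('dbo.MyTable');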
Here are some of the changes I implemented.
Turned unnecessary nvarchars into varchars
Made columns NOT NULL
Reduced max length of varchar columns to suit actual data
Removed some unused columns
Turned a couple of datetime columns into smalldatetime
Turned some decimals into ints.
Merged 16 nullable BIT columns into a bit-masked int (sketched below).
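For that last item, a minimal sketch of the bit-mask merge (table and column names are made up; in reality it would be repeated for all 16 flag columns):

-- Add a single int to hold all the flags.
ALTER TABLE dbo.MyTable ADD Flags int NOT NULL DEFAULT 0;
GO
-- Pack each former BIT column into its own bit of the int.
UPDATE dbo.MyTable
SET Flags = (CASE WHEN IsActive   = 1 THEN 1 ELSE 0 END)
          | (CASE WHEN IsArchived = 1 THEN 2 ELSE 0 END);
GO
-- Then drop the original BIT columns.
ALTER TABLE dbo.MyTable DROP COLUMN IsActive, IsArchived;
GO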
From this, my calculations showed a 60% row size reduction, and against a 6M row table I would have expected far more than 21 MB of saving. It only went down from 2,762,536 KB to 2,740,816 KB.
Can someone please explain this behaviour to me?
p.s. This does not take into account any indexes.
The problem is that altering a table does not reclaim any space. Dropping a column is a logical operation only; the column is hidden, not deleted. Modifying a column type will often result in adding a new column and hiding the previous one. All these operations increase the physical size of the table. To reclaim the space 'for real' you need to rebuild the table. With SQL 2008 and up you would issue an ALTER TABLE ... REBUILD. In SQL 2005 you can use DBCC DBREINDEX on the table.
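In practice that looks something like this (the table name is a placeholder):

-- SQL Server 2008 and later:
ALTER TABLE dbo.MyTable REBUILD;

-- SQL Server 2005: rebuilding the indexes (including the clustered index,
-- which is the data itself) achieves the same thing:
DBCC DBREINDEX ('dbo.MyTable');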
I think you need to rebuild the clustered index on the table.
How can I get the size in bytes of a table returned by a SQL query in SSMS?
Are you looking for the size of the table, or the size of a row in the table? The latter is only readily available if all your columns are of fixed size, i.e. nchar and not nvarchar etc.
With variable-size columns you can use the maximum length of each column and sum these to give you a maximum row size, but this won't accurately reflect your real row sizes.
select sum(max_length)
from sys.columns
where object_id = object_id('MyTable')
You might also create a query that returns DATALENGTH for each column in any particular row to get the total size of only that row.
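For example, to get the actual data size of one specific row (table, column and key values are placeholders; wrap each column in ISNULL(..., 0) if it can be NULL):

SELECT DATALENGTH(Col1) + DATALENGTH(Col2) + DATALENGTH(Col3) AS row_bytes
FROM dbo.MyTable
WHERE Id = 42;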
SQL queries don't return tables, they return results. There is no API to determine the size of a result, because results have streaming semantics: you read the result until the end, and you cannot know the size up front. Sending the size up front would require the server to first produce the entire result, store it somewhere, determine its size (number of rows), and then send the size followed by the result. Obviously this is inefficient and completely undesirable; it is much better to start streaming the result as soon as it is available without having to store it in the meantime.
Perhaps you're looking for something else?
The size of a table in the database can always be determined from its number of pages, see sys.allocation_units. The helper procedure sp_spaceused can read and format this information for you.
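For example (the table name is a placeholder):

EXEC sp_spaceused 'dbo.MyTable';   -- reserved, data, index and unused space for one table
EXEC sp_spaceused;                 -- totals for the current database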
In SSMS only, you can turn on "Include Client Statistics" from one of the menus, which gives some information.
Otherwise, as per Remus' answer
I have a table on SQL Server 2005 that was about 4gb in size.
(about 17 million records)
I changed one of the fields from datatype char(30) to char(60) (there are 25 fields in total, most of which are char(10), so the amount of char space adds up to about 300).
This caused the table to double in size (over 9gb)
I then changed the char(60) to varchar(60) and then ran a function to cut extra whitespace out of the data (so as to reduce the average length of the data in the field to about 15)
This did not reduce the table size. Shrinking the database did not help either.
Short of actually recreating the table structure and copying the data over (that's 17 million records!) is there a less drastic way of getting the size back down again?
You have not cleaned or compacted any data, even with a "shrink database".
DBCC CLEANTABLE
Reclaims space from dropped variable-length columns in tables or indexed views.
However, a simple index rebuild, if there is a clustered index, should also do it:
ALTER INDEX ALL ON dbo.Mytable REBUILD
A worked example from Tony Rogerson
Well, it's clear you're not getting any space back! :-)
When you changed your text fields to CHAR(60), they were all padded to capacity with spaces. So ALL your fields are now really 60 characters long.
Changing that back to VARCHAR(60) won't help - the fields are still all 60 chars long....
What you really need to do is run a TRIM function over all your fields to reduce them back to their trimmed length, and then do a database shrinking.
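A minimal sketch of that step, assuming hypothetical table and column names (SQL Server 2005 has no single TRIM() function, so LTRIM and RTRIM are combined):

UPDATE dbo.MyTable
SET WideColumn = LTRIM(RTRIM(WideColumn));   -- the column that was changed to varchar(60)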
After you've done that, you need to REBUILD your clustered index in order to reclaim some of that wasted space. The clustered index is really where your data lives - you can rebuild it like this:
ALTER INDEX IndexName ON YourTable REBUILD
By default, your primary key is your clustered index (unless you've specified otherwise).
Marc
I know I'm not answering your question as asked, but have you considered archiving some of the data to a history table and working with fewer rows?
Most of the time you might think at first glance that you need all that data all the time, but when you actually sit down and examine it, there are cases where that's not true. Or at least I've experienced that situation before.
I had a similar problem here: SQL Server, Converting NTEXT to NVARCHAR(MAX). It was related to changing ntext to nvarchar(max).
I had to do an UPDATE MyTable SET MyValue = MyValue in order to get it to resize everything nicely.
This obviously takes quite a long time with a lot of records. There were a number of suggestions as to how to do it better. The key one was a temporary flag indicating whether a row had been done or not, and then updating a few thousand rows at a time in a loop until it was all done. This meant I had "some" control over how much it was doing.
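A rough sketch of that kind of loop (the flag column name and batch size are illustrative, not the original code):

-- Assumes a bit column named Resized was added as the temporary flag.
DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (5000) dbo.MyTable
    SET MyValue = MyValue,
        Resized = 1
    WHERE Resized = 0;

    SET @rows = @@ROWCOUNT;
END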
On another note though, if you really want to shrink the database as much as possible, it can help if you turn the recovery model down to simple, shrink the transaction logs, reorganise all the data in the pages, then set it back to full recovery model. Be careful though, shrinking of databases is generally not advisable, and if you reduce the recovery model of a live database you are asking for something to go wrong.
Alternatively, you could do a full table rebuild to ensure there's no extra data hanging around anywhere:
CREATE TABLE tmp_table(<column definitions>);
GO
INSERT INTO tmp_table(<columns>) SELECT <columns> FROM <table>;
GO
DROP TABLE <table>;
GO
EXEC sp_rename N'tmp_table', N'<table>';
GO
Of course, things get more complicated with identity, indexes, etc etc...