We are getting this error on a table in our database:
Cannot create a row of size 8937 which is greater than the allowable
maximum of 8060.
The table consists of about 400 varchar(max) fields. We are, however, only inserting empty strings into these fields.
The insert seems to work; however, the error occurs when using SqlXml to read the data or when running DBCC DBREINDEX on the table's primary key.
It only occurs on one particular SQL Server (2005) and not on others (2005 Express). The problem machine is running 64-bit Windows and the others are running 32-bit Windows.
Has anyone got any ideas about this? Please let me know if I need to include any more information.
I'd like to point out that I completely agree that it is rather extreme, unusual and not at all sensible to be attempting to use this many varchar(max) columns. There are reasons for it, mainly not under my control, that I will not go into here.
The error is caused because you cannot have a row in SQL Server which is larger than 8KB (the size of one page), because rows are not allowed to span pages - it's a basic limit of SQL Server. You can read more about it here:
Database Basics Quick Note - The difference in Varchar and Nvarchar data types
Note that SQL Server will allow you to create the table; however, if you try to actually insert any data that spans multiple pages, it will give the above error.
Of course this doesn't quite add up, because if the above were the whole truth then a single VARCHAR(8000) column would fill a row in a table! (This used to be the case.) SQL Server 2005 got around this limitation by allowing certain data from a row to be stored on another page, leaving a 24-byte pointer in its place. You can read about this here:
How Sql Server 2005 bypasses the 8KB row size limitation
Maximum Row Size in SQL Server 2005 to the Limit
As you can see, this means that rows can now span multiple pages; however, individual columns still need to fit into a single page (hence the maximum size of a column being VARCHAR(8000)), and there is still a limit on the total number of such columns you can have (around 8000 / 24 = ~300 by my estimate).
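If you want to check where a table's data is actually ending up, a diagnostic query along these lines (dbo.WideTable is a placeholder name) shows the allocation unit types (IN_ROW_DATA, ROW_OVERFLOW_DATA, LOB_DATA) and the actual record sizes:

SELECT index_id, alloc_unit_type_desc, page_count,
       min_record_size_in_bytes, max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.WideTable'), NULL, NULL, 'DETAILED');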
Of course this is all missing the main point, which is that 400 wide columns on a single table is absurd!!!
You should take a long hard look at your database schema and come up with something more reasonable - you could start with choosing some more conservative estimates on column sizes (like VARCHAR(255) or VARCHAR(50)), but you really need to split some of those fields out into separate tables.
You might have a deleted column in the table that still takes up space. Also check that the "text in row" settings are the same.
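A quick way to compare those settings between the two servers (dbo.TableName being a placeholder) is to look at the table metadata:

SELECT name, text_in_row_limit, large_value_types_out_of_row
FROM sys.tables
WHERE name = 'TableName';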
The row size is determined by the types of the columns, not how much data you store in them.
Having 400 varchar fields in a single table tells me you are doing something wrong. Perhaps you need to normalize the schema?
I had this problem today, but the table's columns were small and it had only a few hundred rows, and it had worked fine before. I spent a lot of time researching it.
Finally, I backed up the table's data, dropped the table, created an identical new table, and then restored the data. After that, everything was OK.
It seems the old table may have been corrupted internally - maybe a SQL Server bug. Anyway, the issue is gone; I hope this saves others some time.
Background: MS SQL 2012
Related
I have a table with 102 columns and 43200 rows. The Id column is an identity column and 2 columns have a unique index.
When I just execute
Select *
from MyTable
it takes 8+ minutes over the network.
This table has a Status column which contains 1 or 0. If I select with WHERE Status = 1, I get 31565 rows and the select takes 6+ minutes. For your information, rows with status 1 are completed and will never change again, but rows with status 0 are work in progress and their column values are being changed by different users at different stages.
When I select with Status = 0, it takes 1.43 minutes and returns 11568 rows.
How can I increase performance for completed and WIP status query separately or cumulatively? Can I somehow use caching?
SQL Server takes care of caching, at least as long as there is enough free RAM. When it takes that long to get the data in the first place, you need to find the bottleneck.
RAM: Is there enough to hold the full table? And is SQL Server configured to use it?
Is there an upper limit to RAM usage? If not, SQL Server assumes unlimited RAM, and this often ends with caching spilling into the page file, which causes massive slowdowns (a snippet for checking the configured limit follows below).
You said "8+ minutes over the network". How long does it take with local execution? Maybe the network is slow.
Hard drive: When the table is too big to be held in RAM, it gets read from the hard drive. HDDs are somewhat slow. Maybe defragmenting the indices could help here (at least somewhat).
If none of this helps, SQL Profiler might help you find where the bottleneck actually is.
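To check whether an upper limit on RAM is actually configured (the point mentioned above), something like this should work:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
-- the default config_value of 2147483647 means no cap has been set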
This is an interesting question, but it's a little open-ended; more info is needed. I totally agree with allmhuran's comment that maybe you shouldn't be using "select * ..." for a large table. (It could in fact be posted as an answer; it deserves upvotes.)
I suspect there may be design issues - Are you using BLOBs? Is the data at least partially normalized? ref https://en.wikipedia.org/wiki/Database_normalization
I suggest creating a nonclustered index on the Status column. It will improve your queries with a WHERE clause that uses this column.
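A minimal sketch of that suggestion, assuming the table really is called MyTable:

CREATE NONCLUSTERED INDEX IX_MyTable_Status
ON dbo.MyTable (Status);

Keep in mind that with SELECT * the optimizer may still prefer to scan the whole table, since it would need a lookup for every other column; selecting only the columns you actually need (as the comment above suggests) makes such an index far more useful.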
I was using nvarchar(300) earlier in my SQL Server.
I was setting the attribute on the entity like this:
@Column(name = "stud_desc", length = 300)
private String studentDescription;
Now, for certain reasons, I had to remove the length=300 validation part, and I set the column in the database to nvarchar(max).
Now while inserting data into the table, I have noticed that the insert query takes a little bit more time.
Although most values are shorter than 1K characters, some are also over 8K.
My question is: does using the above column datatype in SQL Server have any performance impact?
Also what would be the safer alternative? ( if anything)
Yes, the server engine will prepare to dedicate resources based on the potential length defined for the column.
Even though you do not fill the values up to the maximum, due to the way SQL Server stores and pages data, it makes sure to dedicate the amount of space needed based on what's defined in the structure.
The alternative for this might be a dedicated 1:(0..1) relationship table, where you have the StudentID and Description. That way, if the Description is not needed, it might be faster. Otherwise, it's all about managing data.
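A minimal sketch of that 1:(0..1) split; apart from StudentID and the description, the names here are made up:

CREATE TABLE dbo.Student (
    StudentID   int NOT NULL PRIMARY KEY,
    StudentName nvarchar(100) NOT NULL
    -- ... the other frequently used columns
);

CREATE TABLE dbo.StudentDescription (
    StudentID   int NOT NULL PRIMARY KEY
        REFERENCES dbo.Student (StudentID),
    Description nvarchar(max) NOT NULL
);

-- Join to the description only when it is actually needed:
SELECT s.StudentID, s.StudentName, d.Description
FROM dbo.Student AS s
LEFT JOIN dbo.StudentDescription AS d ON d.StudentID = s.StudentID;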
Due to this issue, certain tables in our db have a fixed maximum row size, although we need more.
Has this been fixed in SQL Server 2008 Express?
Where can I get the details?
For best performance, what is the best way to handle large row sizes in SQL Server 2008 Express?
Personally I usually prefer to create a second table that has a one to one relationship with the first table and put the extra columns there. It is best if you can keep the most used columns in the first table and the least used in the related table.
The ~8K page size still exists; however, it is mitigated for some types with the use of MAX.
The issue does still exist.
The best way to handle very wide rows like this is to normalize your data so you break them up into multiple tables. If you have one or two very wide fields, move them to another table and JOIN to that table if you need that data.
Came across this error today. Wondering if anyone can tell me what it means:
Cannot sort a row of size 9522, which is greater than the allowable maximum of 8094.
Is that 8094 bytes? Characters? Fields? Is this a problem joining multiple tables that are exceeding some limit?
In SQL 2000, the row limit is 8K bytes, which is the same size as a page in memory.
[Edit]
In 2005, the page size is the same (8K), but the database uses pointers on the row in the page to point to other pages that contain larger fields. This allows 2005 to overcome the 8K row size limitation.
The problem that seems to catch a lot of people is that you can create a table that by definition could hold more than 8K of data, and SQL Server will accept the definition just fine. The table will also work fine, up until the point you actually try to insert more than 8K of data into a row.
So, let's say you create a table with an integer field for the primary key, and 10 varchar(1000) fields. The table would work fine most of the time, as the number of times you would fill up all 10 of your varchar(1000) fields would be very few. However, in the event that you tried to put 1000 characters in each of your fields, it would give you the error mentioned in this question.
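A hedged illustration of that scenario (table and column names are made up). Note that the failing insert below errors out on SQL Server 2000, where rows cannot span pages; on 2005 and later the overflowing varchar data is simply pushed to row-overflow pages as described in the edit above:

CREATE TABLE dbo.RowSizeDemo (
    Id int IDENTITY(1,1) PRIMARY KEY,
    Col01 varchar(1000), Col02 varchar(1000), Col03 varchar(1000), Col04 varchar(1000), Col05 varchar(1000),
    Col06 varchar(1000), Col07 varchar(1000), Col08 varchar(1000), Col09 varchar(1000), Col10 varchar(1000)
);

-- Fine: the row stays well under 8060 bytes.
INSERT INTO dbo.RowSizeDemo (Col01, Col02)
VALUES (REPLICATE('x', 1000), REPLICATE('x', 1000));

-- On SQL Server 2000 this one fails with "Cannot create a row of size ...":
INSERT INTO dbo.RowSizeDemo (Col01, Col02, Col03, Col04, Col05, Col06, Col07, Col08, Col09, Col10)
VALUES (REPLICATE('x', 1000), REPLICATE('x', 1000), REPLICATE('x', 1000), REPLICATE('x', 1000), REPLICATE('x', 1000),
        REPLICATE('x', 1000), REPLICATE('x', 1000), REPLICATE('x', 1000), REPLICATE('x', 1000), REPLICATE('x', 1000));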
FYI, running this SQL command on your DB can fix the problem if it is caused by space that needs to be reclaimed after dropping variable length columns:
DBCC CLEANTABLE (0,[dbo.TableName])
See: http://msdn.microsoft.com/en-us/library/ms174418.aspx
That used to be a problem in SQL 2000, but I thought that was fixed in 2005.
8094 bytes.
If you list some more information about what you are doing it might help us to figure out the actual cause.
Have you ever seen any of these error messages?
-- SQL Server 2000
Could not allocate ancillary table for view or function resolution.
The maximum number of tables in a query (256) was exceeded.
-- SQL Server 2005
Too many table names in the query. The maximum allowable is 256.
If yes, what have you done?
Given up? Convinced the customer to simplify their demands? Denormalized the database?
#(everyone wanting me to post the query):
I'm not sure if I can paste 70 kilobytes of code in the answer editing window.
Even if I could, this wouldn't help, since those 70 kilobytes of code reference 20 or 30 views that I would also have to post; otherwise the code would be meaningless.
I don't want to sound like I am boasting here but the problem is not in the queries. The queries are optimal (or at least almost optimal). I have spent countless hours optimizing them, looking for every single column and every single table that can be removed. Imagine a report that has 200 or 300 columns that has to be filled with a single SELECT statement (because that's how it was designed a few years ago when it was still a small report).
For SQL Server 2005, I'd recommend using table variables and partially building the data as you go.
To do this, create a table variable that represents your final result set you want to send to the user.
Then find your primary table (say the orders table in your example above) and pull that data, plus a bit of supplementary data that is only, say, one join away (customer name, product name). You can do an INSERT ... SELECT to put this straight into your table variable (SELECT INTO only works with real tables, not table variables).
From there, iterate through the table and, for each row, do a bunch of small SELECT queries that retrieve all the supplemental data you need for your result set. Insert these into each column as you go.
Once complete, you can then do a simple SELECT * from your table variable and return this result set to the user.
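A minimal sketch of the approach, with made-up table and column names; a set-based UPDATE stands in here for the per-row lookups described above, but the shape is the same:

DECLARE @Result TABLE (
    OrderID      int PRIMARY KEY,
    CustomerName nvarchar(100),
    ProductName  nvarchar(100),
    ShipperName  nvarchar(100)   -- supplemental column, filled in afterwards
);

-- Step 1: the primary table plus data that is only one join away.
INSERT INTO @Result (OrderID, CustomerName, ProductName)
SELECT o.OrderID, c.CustomerName, p.ProductName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
JOIN dbo.Products  AS p ON p.ProductID  = o.ProductID;

-- Step 2: fill the supplemental columns with small, targeted queries.
UPDATE r
SET ShipperName = s.ShipperName
FROM @Result AS r
JOIN dbo.Orders   AS o ON o.OrderID   = r.OrderID
JOIN dbo.Shippers AS s ON s.ShipperID = o.ShipperID;

-- Step 3: return the assembled result set.
SELECT * FROM @Result;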
I don't have any hard numbers for this, but there have been three distinct instances that I have worked on to date where doing these smaller queries has actually worked faster than doing one massive select query with a bunch of joins.
#chopeen You could change the way you're calculating these statistics and instead keep a separate table of all per-product stats. When an order is placed, loop through the products and update the appropriate records in the stats table. This would shift a lot of the calculation load to the checkout page rather than running everything in one huge query when running a report. Of course there are some stats that aren't going to work as well this way, e.g. tracking customers' next purchases after purchasing a particular product.
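A hedged sketch of that idea; every name here is hypothetical, and a real version would also need to insert rows for products that are not yet in the stats table:

CREATE TABLE dbo.ProductStats (
    ProductID    int   NOT NULL PRIMARY KEY,
    UnitsSold    int   NOT NULL DEFAULT 0,
    TotalRevenue money NOT NULL DEFAULT 0
);

-- Run at checkout for the order just placed, instead of recomputing
-- everything inside one huge reporting query later.
DECLARE @OrderID int;
SET @OrderID = 12345;

UPDATE s
SET UnitsSold    = s.UnitsSold + oi.Quantity,
    TotalRevenue = s.TotalRevenue + oi.Quantity * oi.UnitPrice
FROM dbo.ProductStats AS s
JOIN dbo.OrderItems   AS oi ON oi.ProductID = s.ProductID
WHERE oi.OrderID = @OrderID;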
This would happen all the time when writing Reporting Services Reports for Dynamics CRM installations running on SQL Server 2000. CRM has a nicely normalised data schema which results in a lot of joins. There's actually a hotfix around that will up the limit from 256 to a whopping 260: http://support.microsoft.com/kb/818406 (we always thought this a great joke on the part of the SQL Server team).
The solution, as Dillie-O alludes to, is to identify appropriate "sub-joins" (preferably ones that are used multiple times) and factor them out into temp tables or table variables that you then use in your main joins. It's a major PIA and often kills performance. I'm sorry for you.
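For what it's worth, a tiny sketch of factoring a sub-join out into a temp table (names invented):

-- Materialize a join that the report otherwise repeats several times ...
SELECT c.CustomerID, c.CustomerName, a.City, a.Country
INTO #CustomerAddress
FROM dbo.Customers AS c
JOIN dbo.Addresses AS a ON a.AddressID = c.AddressID;

-- ... then reference the temp table in the main query, which keeps the
-- number of base tables and views per query under the 256 limit.
SELECT o.OrderID, ca.CustomerName, ca.City
FROM dbo.Orders AS o
JOIN #CustomerAddress AS ca ON ca.CustomerID = o.CustomerID;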
#Kevin, love that tee -- says it all :-).
I have never come across this kind of situation, and to be honest the idea of referencing > 256 tables in a query fills me with a mortal dread.
Your first question should probably be "Why so many?", closely followed by "What bits of information do I NOT need?" I'd be worried that the amount of data being returned from such a query would begin to impact performance of the application quite severely, too.
I'd like to see that query, but I imagine it's some problem with some sort of iterator, and while I can't think of any situations where it's possible, I bet it's from a bad while/case/cursor or a ton of poorly implemented views.
Post the query :D
Also I feel like one of the possible problems could be having a ton (read: 200+) of name/value tables which could be condensed into a single lookup table.
I had this same problem... my development box runs SQL Server 2008 (the view worked fine), but on production (with SQL Server 2005) the view didn't work. I ended up creating additional views to avoid this limitation, using the new views as part of the query in the view that threw the error.
Kind of silly considering the logical execution is the same...
Had the same issue in SQL Server 2005 (worked in 2008) when I wanted to create a view. I resolved the issue by creating a stored procedure instead of a view.