PRIMARY filegroup is full - sql-server

Since there is no space left on my SQL Server, I keep getting the following error and I have to delete data to get the system working again.
The hosting company says they do not have unlimited package options and suggest that we switch to another server. Apart from that, I have read about 'shrink' on the internet, but will this damage the database, or will it be the definitive solution for me?
Could not allocate space for object in database because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

As has been mentioned in the comments, Shrink Database won't help you because SQL Server will use all free space in the file before it attempts to grow it.
So your options are as follows:
Archive or delete old data
If you have audit tables, large blobs that no longer need to be accessed, or other data that doesn't need to be accessed within your application any more, you can SELECT it out of the database, store it in a file somewhere as an archive (or retrieve it from an older backup when you need it).
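A minimal sketch of that approach, assuming a hypothetical audit table and an archive database sitting on a disk that still has free space (all names here are placeholders):

SELECT *
INTO   ArchiveDb.dbo.AuditLog_Old    -- archive copy in a separate database
FROM   dbo.AuditLog
WHERE  LoggedAt < '2020-01-01';

DELETE FROM dbo.AuditLog             -- delete in batches if the log file is also tight on space
WHERE  LoggedAt < '2020-01-01';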
Apply table compression
If you are running Enterprise edition, or Standard/Express/Web on SQL Server 2016 SP1 or later, you can apply compression to larger tables. Among other things, this stores fixed-length CHAR columns as variable-length data, and I have seen it gain back a third of the space on some databases. However, you will run into the problem of needing free space in which to perform the compression, so you may need to apply this in concert with #1. A restart of the database service may also give you some space back from tempdb in which to operate, if your tempdb has filled.
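For example (a sketch only; the table name is a placeholder, and sp_estimate_data_compression_savings requires SQL Server 2008 or later):

-- Estimate how much page compression would save before committing to it
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo', @object_name = 'BigTable',
     @index_id = NULL, @partition_number = NULL,
     @data_compression = 'PAGE';

-- Apply it; note that the rebuild itself needs working space
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);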
Upgrade storage space
If your host can't meet your storage space needs and #1 and #2 are not possible, it may be time to look at changing hosts. This may be a good idea anyway as your database will no doubt eventually grow back to bigger than its current size.

Keep in mind that if you are on Microsoft SQL Server Express, the license limits you to about 10 GB per database, so you cannot grow the primary filegroup beyond that. For Express editions before SQL Server 2008 R2 the limit is even smaller (4 GB).

Related

How can I link a SQL image column to an external database?

I need a bit of help with the following.
Note: in the following scenario, I do not have access to the application's source code, therefore I can only make changes at the database level.
Our database uses dbo.[BLOB] to store all kinds of files and documents. The table uses the IMAGE (yeah, obsolete) data type. Since this particular table is growing quite fast, I was thinking of implementing some archiving feature.
My idea is to move all files older than X months to a second database, and then somehow link from the dbo.[BLOB] table to the external/archiving database.
Is this even possible? The goal is to reduce the database size, in order to improve backup and query performance.
Any ideas and hints much appreciated.
Thanks.
Fabian
There are 2 features to help you with backup speed and database size in this case:
Filestream will allow you to store BLOBs as files on the file system instead of inside the database file. It complicates the backup scenario, since you have to back up both the database and the files, but you get a smaller database file along with faster access time to the documents. It is much faster to read a file from the file system than from a blob column. Additionally, FILESTREAM allows for files bigger than 2 GB.
Partitioning will split the table into smaller chunks at the physical level. This way you do not need to touch application code to change where particular rows are stored physically: you can decide which data needs to be accessed fast and put it on an SSD drive, and which can land on slower archive storage. You can also back up the current partition more frequently and the archive partitions less frequently.
Prior to SQL Server 2016 SP1 this feature was available in the Enterprise edition only. From SQL Server 2016 SP1 it is available in all editions.
In your case you should most likely go with FILESTREAM first.
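A minimal sketch of what the FILESTREAM setup might look like (the database, filegroup, path, and table names here are hypothetical; the existing IMAGE column cannot be converted in place, so the data would have to be copied into a VARBINARY(MAX) FILESTREAM column):

-- FILESTREAM must also be enabled at the instance level in SQL Server Configuration Manager
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;

ALTER DATABASE MyDb ADD FILEGROUP DocumentsFS CONTAINS FILESTREAM;
ALTER DATABASE MyDb ADD FILE
    (NAME = DocumentsFS, FILENAME = 'D:\Data\DocumentsFS')
    TO FILEGROUP DocumentsFS;

-- FILESTREAM requires a ROWGUIDCOL column with a unique constraint
CREATE TABLE dbo.BlobFiles
(
    BlobId  UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    Content VARBINARY(MAX) FILESTREAM NULL
);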
Without modifying the application you can do, basically, nothing. You may try to see if changing the column type will be tolerated by the application (very unlikely; 99.99% it will break the app) and try to use FILESTREAM, but even if you succeed it does not give much benefit (the backup size will be the same, for example).
A second thing you can try is to replace the table with a view, using INSTEAD OF triggers for writes. It is still very likely to break the application (let's say 99.98%). The goal would be to have a distributed partitioned view (or a cross-database partitioned view) which presents to the application a unified view of the 'cold' and 'hot' data. It is complex and error prone, but it will reduce the size of the backups (as long as data is moved from hot to cold and the cold data is immutable, requiring few backups).
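A rough sketch of that cross-database partitioned view idea (table and database names are hypothetical, and an INSTEAD OF trigger would still be needed to route writes):

-- Assumes the original table has been renamed to BLOB_Hot and the
-- old rows have been moved to ArchiveDb.dbo.BLOB_Cold
CREATE VIEW dbo.[BLOB]
AS
SELECT BlobId, FileName, Content, CreatedAt FROM dbo.BLOB_Hot
UNION ALL
SELECT BlobId, FileName, Content, CreatedAt FROM ArchiveDb.dbo.BLOB_Cold;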
The goal is to reduce the database size, in order to improve backup and query performance.
To reduce the backup size, as I explained above, you can do basically nothing. But performance is something you need to investigate and address appropriately, based on your findings. Saying that the database is slow 'because of BLOBs' is hand-waving.

SQL Server 2005 TempDB Size

We are using SQL Server 2005. Recently, SQL Server 2005 crashed in our production environment due to a large tempdb size.
1) What could be the reason for the large tempdb size?
2) Is there any way to look at what data is in tempdb?
2) Is there any way to look at what data is in tempdb?
No, because it is not kept there. Tempdb gets very special treatment, like being dropped and recreated on every server restart.
1) What could be the reason for the large tempdb size?
Inefficient SQL, maintenance jobs, or just the data at hand. Obviously an 800 GB or 6,000 GB database may require more tempdb space than a 4 GB online CRM attempt. You don't really specify ANY size in absolute terms. What IS large? I have tempdb databases hard-coded at 64 GB on my smaller servers.
Typical SQL that goes into tempdb includes:
Sorts that are not solvable as part of the query (you need to store keys SOMEWHERE)
DISTINCT. Needs all returned data in tempdb to find duplicates.
Certain operations, possibly during joins (e.g. hash or sort spills).
Temporary tables. I just mention them because I often keep some hundred megabytes' worth of data in them during loads and scrubbing.
In general you can find those queries by their huge IO stats in the query log, or simply by their being slow.
That said, maintenance plans also go in there, but with reason. In the end, your "large" is possibly my "not even worth mentioning tiny". It really depends what you do. Use the query trace tool to find out what takes long.
Physically, tempdb gets very special treatment - SQL Server does NOT write to the file if it does not have to (i.e. it keeps things in memory). Writes to the disc are a sign of memory overflowing. This is different from normal db write behavior. Tempdb, IF it overflows, is best put onto a decently fast SSD... which won't normally be SO expensive because it still will be relatively small.
Use a query like the one below to find out what is using tempdb - basically you are fishing in dirty water here, and need to try things until you find the culprit.
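For example, on SQL Server 2005 and later, a query along these lines (a sketch using the standard DMVs) shows which sessions currently hold the most tempdb pages:

SELECT session_id,
       SUM(user_objects_alloc_page_count)     * 8 AS user_objects_kb,
       SUM(internal_objects_alloc_page_count) * 8 AS internal_objects_kb
FROM   sys.dm_db_session_space_usage
GROUP BY session_id
ORDER BY internal_objects_kb DESC;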
The usual way to grow a SQL Server database — any database, not just tempdb — is to have its data and log files set to autogrow (especially the log files). SQL Server is perfectly happy to grow the log and data files until they consume all the disk space available to them.
Best practice, IMHO, is to allow limited autogrowth on the data files (put an upper bound on how big they can grow) and fix the size of the log files. You might need to do some analysis to figure out how big the log files need to be. Use the simple recovery model where point-in-time restore is not needed (tempdb always uses simple recovery).
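For example (a sketch; the database and logical file names are placeholders):

-- Bounded autogrowth on the data file
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 512MB);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, MAXSIZE = 50GB);
-- Fixed-size log: pre-size it and turn autogrow off
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log, SIZE = 4GB);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log, FILEGROWTH = 0);
-- Simple recovery where point-in-time restore is not required
ALTER DATABASE MyDb SET RECOVERY SIMPLE;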
OK, tempdb is kind of a special database. Any temporary objects you use in procedures etc. are created here. So if your application uses a lot of temp tables in queries, they will all reside here, but they should clean themselves up after the connection (spid) is reset.
The other thing that can grow tempdb is database maintenance tasks, however they will take a larger toll on the database log files.
Tempdb is also cleared every time you restart the SQL service. It basically drops the database and re-creates it. I agree with #Nic about leaving tempdb as it is; don't muck around with it. Any issues with space in tempdb usually indicate another, larger problem somewhere else. More space will mask the problem, but only for so long. How much free space does the drive you have tempdb on have?
Another thing: if you haven't already, try to put tempdb on its own drive, and if possible have the data and the log files on their own separate drives.
So, if you don't restart your SQL Server service, your drive will run out of space pretty soon.
USE tempdb;
SELECT (size * 8) AS FileSizeKB  -- size is stored in 8 KB pages
FROM sys.database_files;

Should I store file in database or just the location to that file?

Which is the better practice for storing a file? Store the file directly in the database, or just the location of that file?
Avoid storing files in your database. Most don't deal with them well.
It depends. You need to consider several things.
If you have a mickey mouse freeware database, meaning that it does not handle blobs appropriately (reads the blobs on every SELECT; does not store the blobs in a separate physical structure to the row; very slow with blobs; etc)
keep the files outside, store only the location
manually deal with the syncing of row.location vs the file system
If you have an enterprise SQL Platform, it is no problem at all to keep the blobs inside the database. In fact, retrieval is faster. These do not read the blobs on every SELECT, they are stored in a separate physical structure to the rows. The one extra read to get the blob if the SELECT requests it, is not a "performance problem".
The PAGESIZE in genuine SQL databases can be set as 2k; 4k; 8k; or 16k.
2k is perfect for OLTP (small rows, small Transactions: you do not want to move 8K on every IO operation)
larger sizes are relevant based on how much OLAP you cater for
in your case, the average size of the files
there will be some waste in the unused portion of the last page, per row/blob.
The disadvantage of keeping the blobs in the database is, your database backups will be significantly larger.
Some enterprise databases (eg. SAP/Sybase) recognise that a page has not changed, and excludes it from the incremental backups
others have no incremental database backups.
The advantage of keeping the blobs in the database is:
data and referential integrity. You will not have the problem of rows being out of sync with the blobs
the blobs are included in the backup: otherwise, upon a restore, the task of syncing the restored database with the restored files is a major problem.
I completed an assignment last year, where the customer had 130GB of data in the db, and 700GB of documents stored outside the db. After ten years of problems, they bit the bullet, and moved the documents into the db.
Guess what, what was supposed to be a simple job (long but simple, because the references were supposed to be absolutely correct), ended up being massive, because there were so many (a) duplicates, and (b) invalid references.
The resulting database was 630GB, there were 100GB of dupes. 2K pagesize.
Responses to Comments
Slash or Backslash
Easy.
In the database, store slash only.
You need a way of identifying the target system, and an IsWindoze indicator. It should be higher up in the table hierarchy, not at the level where the Filename is located.
Whenever you report or display the Filename column, if IsWindoze, change the slashes to backslashes.
You will have a similar problem with the DriveLetter and colon D:, which Unix does not have. Allow it only if IsWindoze.
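A hypothetical sketch of the display-time conversion (table, column, and key names are made up for illustration):

SELECT CASE WHEN ts.IsWindoze = 1
            THEN COALESCE(ts.DriveLetter + ':', '') + REPLACE(d.FileName, '/', '\')
            ELSE d.FileName
       END AS DisplayFileName
FROM dbo.Document AS d
JOIN dbo.TargetSystem AS ts ON ts.SystemId = d.SystemId;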
Late answer: it depends on your engine.
A page size of 2k hasn't been used since the 1990s for SQL Server. Oracle defaults to 8K, SQL Server is 8K. Only Sybase AFAIK is still in the last century.
SQL Server now offers FILESTREAM which combines the best of both worlds, as Oracle has done for longer with BFILE
SQL Server and Oracle offer on disk and backup compression
I'm sure PostgreSQL at least offers similar features.
Note: this is mainly to offer alternatives to PerformanceDBA's FUD
The preferred method is to store the file in the filesystem and store the location of the file in the database. The reasoning for this has to do with how databases physically allocate space on disk (usually in 8k or 16k chunks). Dropping large files in there causes your database to use different mechanisms to store the files (SQL Server calls this row-overflow data). Typically these kinds of pages are located outside the normal table, so every logical read for a row results in two physical reads on disk. Needless to say, this isn't good for performance.

SQL Server filegroup full during a large INSERT INTO statement

Consider a SQL script designed to copy rows from one table to another in a SQL 2000 database. The transfer involves 750,000 rows in a simple:
INSERT INTO TableB([ColA],[ColB]....[ColG])
SELECT [ColA],[ColB]....[ColG]
FROM TableA
This is a long running query, perhaps in part because ColB is of type ntext.
There are a handful of CONVERT() operations in the SELECT statement.
The difficulty is that after ~15 mins of operation, this exception is raised by SQL Server.
Could not allocate space for object '[TABLE]'.'[PRIMARY_KEY]' in database '[DB]' because the 'PRIMARY' filegroup is full.
Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
More Info
Autogrowth is already on.
There is more than enough free space on disk (~20 GB).
The single .mdf is ~6 GB.
There are no triggers on the source or target tables.
Question
What options need to be set, either via Management Studio, or via T-SQL to allow the database to grow as required? What other remedies would you suggest?
Resolution
The db could not grow as needed because I was hosting this database on an instance of SQL Server 2008 Express. Upgrading to a non-neutered version of SQL Server will solve this problem.
Best advice: Pre-size your database larger, instead of forcing it to grow on-demand (which can be a slow operation).
One reason this error could occur is if your autogrow interval is set too large. Besides the obvious (trying to grow by 25GB with only 20GB on disk), a large growth interval can take a very long time to allocate, which can cause your query to time out.
EDIT: Based on your new screenshot, doesn't look like the interval is the problem. But, my original advice still stands. Try to manually grow the database yourself, and see if it lets you:
ALTER DATABASE foobar
MODIFY FILE (NAME = foobar_data, SIZE = 5000)  -- SIZE with no suffix is in MB
If you can share a screenshot/information on your PRIMARY filegroup makeup and autogrow settings (i.e. all files included in PRIMARY and the autogrow settings for each) that would be helpful as well. First thought, without seeing anything additional, is that you potentially have a MAXSIZE specified for one or more of the files that make up your PRIMARY filegroup, but that's just a hunch without actually seeing the information.
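In the meantime, a query along these lines (assuming a SQL Server 2005 or later instance) will list the files in the current database with their autogrow and max-size settings:

SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       max_size,            -- in 8 KB pages; -1 means unlimited
       growth,              -- in 8 KB pages, or percent if is_percent_growth = 1
       is_percent_growth
FROM sys.database_files;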
Are there any triggers on the table in question? I've seen similar results before when there were triggers and it was actually the expansion of the log file (ldf), which reached its limit just logging all the queries that were being run by the triggers, and not the mdf itself. If there are any triggers, I'd consider disabling them while you do this update and seeing if that helps (I assume this is a one-off data migration rather than a recurring event?)

primary filegroup full

I discovered an error in a log file while troubleshooting another issue on our SQL 2000 server. I am seeing 'primary filegroup is full'. The MDF and LDF files are in the default location on the system partition, on an NTFS drive. The MDF file is 1962 MB in size. Some posts have indicated that the size can't exceed 2 GB. I ran DBCC SHRINKDATABASE against it but it does not appear to have changed the size. Is there a command I need to run first to remove old information, before running the shrink?
When I go into Enterprise Manager I have 2 SQL groups. One is local and the other is listed by the server name. The database problem is happening on the 2nd group. When I try to manually increase the size of the data file it says that due to licensing restrictions I am limited to 2048 MB. The SQL instances in the other group allow me to change that number above 2048 MB.
The 2GB database size limit only applies to MSDE (the free, "desktop" version of SQL 2000). Other SQL 2000 versions don't have that limitation.
There is no "magic" way to purge or archive old, historical data. You have to know your database and its structure, your customer requirements, and your data retention needs.
You can try this:
ALTER DATABASE foo ADD FILE (
    NAME = 'file2',
    FILENAME = 'C:\PATH\TO\FILE.ndf',
    SIZE = 100MB,
    MAXSIZE = UNLIMITED,
    FILEGROWTH = 10MB
);
Make sure you back up first.
Also, are you running MSDE (the free edition of SQL 2000)? There is a definite 2GB limit there...
MDF files are where the data is stored, so it's possible it holds that much data.
I've got an MDF file of 13GB, so I don't believe that limit is correct.
Regarding shrinking it by removing data: you could delete some data and then shrink the file, or you could extend that filegroup by adding another data file to it.
Alternatively, create a database maintenance plan to reduce the amount of free space per page, and also possibly remove unused space.
I would suggest you extend it, and then review your maintenance plans to ensure they are correct.
