Wrong SIZE, MAXSIZE and FILEGROWTH? - sql-server

I know that, basically, when creating a database the model system DB is copied, so based on the pictures below:
Why is the initial size 3 MB for the PRIMARY file and 1 MB for the LOG, if the documentation clearly says it should be 8 MB for versions above 2016 and 1 MB for anything lower? (I'm in this category, as I'm using 2014.)
I understand that a log file can grow to a 2 TB maximum, but why does the Model database say unlimited while the STACK database says limited to 2 TB?
Assuming that the actual default size is indeed 3 MB and 1 MB, why do I see 2240 KB and 560 KB on disk?

Although the documentation for 2016 and later versions does say that the initial size is 8 MB, I can't find any reference to a 1 MB initial size for the 2005, 2008, 2008 R2, 2012 and 2014 versions. I don't have a 2014 instance at hand, but in 2008 R2 the initial size of model is also 3 MB, so that seems to be the default initial value (maybe you confused the initial size with the default autogrow value of 1 MB).
On this point the documentation doesn't seem to be 100% accurate, because all versions prior to 2016 list the default autogrow of the primary data file as 10% when the real value is 1 MB. Also, for the model log file the maximum size is Unrestricted/Unlimited (if you run select * from model..sysfiles, the value of the maxsize column in the log row is -1), but when you create a new database the maximum size of its log file is indeed 2 TB. I think this is explained by this paragraph of the 2014 documentation:
If you modify the model database, all databases created afterward will inherit those changes. For example, you could set permissions or database options, or add objects such as tables, functions, or stored procedures. File properties of the model database are an exception, and are ignored except the initial size of the data file.
So I think the documentation just coalesces the two facts, meaning that the maximum size of the log file for new databases is 2 TB.
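For instance, a minimal sketch (reusing the STACK database name from the question) that shows the inherited cap: a newly created database copies model, but its log file gets the 2 TB limit rather than model's "unlimited":

CREATE DATABASE STACK;
-- max_size is reported in 8 KB pages: 268435456 pages = 2 TB, -1 = unrestricted
SELECT name, type_desc, max_size
FROM STACK.sys.database_files;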
Because that column of SSMS doesn't show decimal values, the real value has to be rounded to an integer. Instead of ROUND() they probably use CEILING(), just to be on the safe side (at least for low values; otherwise a size of e.g. 490 KB would be reported as 0 MB).
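You can check both figures yourself with the sysfiles compatibility view mentioned above; a minimal sketch:

-- size is stored in 8 KB pages, so size * 8 gives KB;
-- CEILING reproduces what SSMS displays (2240 KB -> 3 MB, 560 KB -> 1 MB)
SELECT name,
       size * 8 AS size_kb,
       CEILING(size * 8 / 1024.0) AS displayed_mb,
       maxsize
FROM model..sysfiles;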

Why does SQL Server Image data type have max_length set to 16 in sys.all_columns

While checking column definitions in SQL Server via sys.all_columns, I found that the max_length of the image data type is set to 16. Can anyone help explain the meaning of this?
The 16 you see there does not refer to the max size of the data; it refers to the size of the pointer to the LOB value (which by default is stored off-row).
The documentation for sys.columns mentions the 16 bytes but does a poor job of explaining why (and it only says it applies to text, when in fact it applies to text, ntext, and image). I recently submitted a PR to make this clearer, and it was approved (it should publish in the next week or so).
There is a much better little info nugget on the TEXTPTR topic, where one of the examples says:
returns the 16-byte text pointer
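You can see this for yourself; a minimal sketch (the table name is hypothetical):

-- text, ntext and image columns all report max_length = 16:
-- that is the size of the in-row pointer, not of the LOB data
CREATE TABLE dbo.LegacyLob (doc image, notes text, wide_notes ntext);

SELECT c.name, t.name AS type_name, c.max_length
FROM sys.columns AS c
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.LegacyLob');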
Note that the documentation for image has had this warning since SQL Server 2005 was released more than 16 years ago:
IMPORTANT! ntext, text, and image data types will be removed in a future version of SQL Server. Avoid using these data types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and varbinary(max) instead.
I also blogged about this just a few months ago:
Deprecated features to take out of your toolbox – Part 3

ExecutionLogDaysKept parameter is set to -1 (SSRS)

I'm managing a SQL Server with Reporting Services running on it. The ReportServer database is getting too big. When I checked which table is occupying the most space, [ExecutionLogStorage] came up. I found out that the data stored in this log table can be controlled by the ExecutionLogDaysKept property. When I checked ConfigurationInfo, the property is set to -1. Is that the default value set when SSRS is installed? From what I read, SSRS should only store two months' worth of data in this table, but I see data from 2011, which I suspect is from when SSRS was installed. I would like to understand the significance of -1. If it's confirmed to be incorrect, I'll go ahead and set an appropriate value for my environment.
The default is usually 60, which keeps about two months of data.
Setting the value to -1 will keep the data 'forever', as you are finding (if you delete a report, log data for it is also dropped).
Setting the value to 0 does not keep any data - there are some references out there that INCORRECTLY tell you that setting a value of 0 has the effect that -1 actually does (don't ask me how I know...).
I like to set this to 400 - that way you have over a year's worth of data, which can be handy if you want to do analysis over time. Also, you will pick up reports that only run annually, if that's a concern.
Most installations change enough in a year that there's not much point to keeping more data.
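If you want to confirm the current value before changing it, a minimal sketch reading the catalog directly (the ReportServer database name follows the question; make the actual change through SSMS's Server Properties > Logging page rather than by updating the table):

-- -1 here means execution log rows are never purged
SELECT Name, Value
FROM ReportServer.dbo.ConfigurationInfo
WHERE Name = 'ExecutionLogDaysKept';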

Is there a faster way to delete the first x rows from a DBF?

I have been trying to use CDBFLite to delete records 1 to 5 million or so from a DBF file (in order to decrease the file size). Due to factors beyond my control, this is something I will have to do every day. The file size exceeds 2 GB.
However, it takes forever to run the delete commands. Is there a faster way to just eliminate the first X records of a DBF (and thus end up with a smaller file)?
As noted by Ethan, a .DBF file typically caps at the standard 32-bit limit of 2 GB per single file, unless you are dealing with another software engine such as Sybase's Advantage Database Server, which can read/write .DBF files beyond the 2 GB capacity.
That said, the DBF standard format has a single character on each record as a "flag" that the record is deleted, yet the space is still retained. To reduce the size, you need to PACK the file, which actually REMOVES the deleted records and thus shrinks the file back down.
Now, Ethan has options via Python, and I via C#.NET using the Microsoft Visual FoxPro OleDb provider, and can offer more, but I don't know what you have access to.
If you have VFP (or dBASE) directly, then it should be as simple as going to the command window and doing:
USE [YourTable] EXCLUSIVE
PACK
But I would make a backup copy of the file first as a simple precaution.
Here's a very rough outline using my dbf package:
import dbf
import shutil

database = r'\some\path\to\database.dbf'
backup = r'\some\backup\path\database.backup.dbf'

# make backup copy
shutil.copy(database, backup)

# open copy
backup = dbf.Table(backup)

# overwrite original (new, empty table with the same structure)
database = backup.new(database)

# copy over the last xxx records
with dbf.Tables(backup, database):
    for record in backup[-10000:]:
        database.append(record)
I suspect copying over the last however many records you want will be quicker than packing.

Max real space in a varbinary(max) in SQL Server

I am saving files (of any type) in a SQL table, using a varbinary(max) column. I found out that the max size for this data type is listed as 8000, but what does the 8000 mean?
The online documentation says that it is 8000 bytes. Does that mean the maximum size of a file saved there is 8000/1024 = 7.8125 KB?
I started testing and the maximum file that I can store is 29.9 MB. If I choose a larger file I get a SQLException:
String or binary data would be truncated. The statement has been
terminated.
Implement SQL Server 2012 (codename Denali) when it's released - it has the FileTable feature :)
varbinary(8000) is limited to 8,000 bytes - that's for sure!
varbinary(max) is limited to 2 gigabytes
varbinary(max) FILESTREAM is limited by your file system (FAT32 - 4 GB, NTFS - 16 exabytes)
Taken from here:
http://msdn.microsoft.com/en-us/library/ms188362.aspx:
max indicates that the maximum storage size is 2³¹-1 bytes
which is 2 147 483 647 bytes. I'm not sure why it stops at 29.9MB.
What version of SQL Server are you using?
Varbinary on MSDN for SQL Server 2008 explicitly says that VarBinary(MAX) is for use when "the column data entries exceed 8,000 bytes."
I would also take a look at the Filestream capabilities in SQL Server 2008, if that is the version you are using.
I got the "String or binary data would be truncated" error when trying to store 5MB using varbinary(max) on SQL Server 2005. Increasing the autogrowth size for the database solved the problem. Took me a while to figure out, so just thought I'd share :)
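To see the real limit in practice, a minimal sketch (the table name and file path are hypothetical) that loads a file into a varbinary(max) column and measures it:

CREATE TABLE dbo.Files (id int IDENTITY PRIMARY KEY, data varbinary(max));

-- SINGLE_BLOB reads the whole file as one varbinary(max) value
INSERT INTO dbo.Files (data)
SELECT BulkColumn
FROM OPENROWSET(BULK N'C:\temp\example.pdf', SINGLE_BLOB) AS f;

-- DATALENGTH reports the stored size in bytes (up to 2^31 - 1)
SELECT id, DATALENGTH(data) AS size_bytes FROM dbo.Files;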

Size limit problem with ntext in Sql Server CE 3.5 and Visual Studio

I am experiencing some strange behavior with SQL Server CE 3.5 SP2.
I have a table with 2 columns; one of type int named ID, which is the primary key, and one of type ntext named 'Value'. The 'Value' column is supposed to contain rather long string values. However, when I try to store a string that is longer than 4000 characters, the value turns into an empty string!
I am using the Visual Studio 2010 server explorer to do this.
What's going on? I thought this 4000 character limit was for nvarchar and that ntext had a 2GB limit. Am I forgetting something or are these limits different for SQL Server CE? MSDN is not very clear on this.
According to the documentation the limit is 536,870,911 characters:
http://msdn.microsoft.com/en-us/library/ms172424(v=SQL.100).aspx
This seems to explain what you're seeing:
http://social.msdn.microsoft.com/Forums/en-US/sqlce/thread/9fe8e826-7c20-466c-8140-4d3b0649ac09
Alright, after trying a lot of things and reading many obscure posts on the subject, it turned out not to be a SQL Server CE problem at all, but an issue with Visual Studio.
There is a setting under Options->Database Tools->Query Results that specifies the maximum number of characters retrieved by a query. What happened was that after the string was entered in the Server Explorer table editor, it was actually persisted in SQL Server CE, but Visual Studio could not display it due to the aforementioned setting.
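A minimal sketch (the ID and Value columns follow the question; MyTable is a placeholder) to confirm the long string really was persisted, measured on the engine side instead of in the grid:

-- ntext stores 2 bytes per character; DATALENGTH returns bytes
SELECT ID, DATALENGTH([Value]) / 2 AS chars_stored
FROM MyTable
WHERE ID = 1;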
The NTEXT data type in SQL Server CE can actually store up to 536,870,911 characters.
That represents 1,073,741,822 bytes of physical space, about 1 gigabyte, or half of the 2 GB that full SQL Server would store.
But in practice this capacity is limited by other factors.
First, the data file can store a maximum of slightly less than 4 gigabytes, given the space that must be reserved for page management. A single record at even a quarter of that size will be quite time-consuming to load, and may appear to be blank (not null) when in fact it is not.
Second, be careful with the commands used to select the data, which can inadvertently convert one type to another.
For example, NTEXT can be converted to NVARCHAR when using an ALIAS for the selected fields, and values above the capacity of NVARCHAR in the converted fields may appear as blanks as well.
