I have a backup query like this:
BACKUP DATABASE #temp_baza TO DISK = #temp_bak
BACKUP LOG #temp_baza TO DISK = #temp_log
It is run by sqlcmd like this:
sqlcmd -l 120 -S %SQL_SERVER% -i %KOPIA_KATALOG%backupPELNY.sql
-o %KOPIA_KATALOG%output_PELNY.txt -v NAZWA_BAZY="%NAZWA_BAZY%"
-v KOPIA="%PELNY_KOPIA%\"
In output_PELNY.txt I have this kind of results:
BACKUP DATABASE successfully processed 645127 pages in 2819.651 seconds (**1.787 MB/sec**).
or
BACKUP DATABASE successfully processed 26338 pages in 227.348 seconds (**0.905 MB/sec**).
The main database is on one disk and the backup is written to a second disk.
When I use Explorer to copy files between these two disks, I get a transfer speed of approximately 100 MB/s.
QUESTION:
Why is the backup speed so slow - less than 3 MB/s?
REMARKS:
Windows Server 2012 Essentials + SQL Server 2008 R2 Express
Intel XEON E3-1270 v3 + 16GB RAM
To solve your problem, please refer to the following links:
Options to Improve SQL Server Backup Performance
Is your SQL Server backup running slow? Here’s how you can speed it up
Super-Fast Backup and Restore Throughput for SQL Server
MS SQL Server backup optimization
How to Make SQL Server Backups Go Faster
To speed up the backup, I suggest adding the BUFFERCOUNT and MAXTRANSFERSIZE options to the backup command, for example:
BACKUP DATABASE DBNAME TO DISK = 'C:\1\DBNAME.bak' WITH BUFFERCOUNT = 16, MAXTRANSFERSIZE = 4194304
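Applied to the script from the question, which is invoked with the NAZWA_BAZY and KOPIA scripting variables, a minimal sketch of backupPELNY.sql might look like this (the file names appended to $(KOPIA) are only illustrative, and the BUFFERCOUNT/MAXTRANSFERSIZE values are a starting point to tune):
-- Full database backup with larger I/O buffers
BACKUP DATABASE [$(NAZWA_BAZY)]
TO DISK = N'$(KOPIA)$(NAZWA_BAZY)_FULL.bak'
WITH BUFFERCOUNT = 16, MAXTRANSFERSIZE = 4194304, STATS = 10;

-- Log backup with the same options
BACKUP LOG [$(NAZWA_BAZY)]
TO DISK = N'$(KOPIA)$(NAZWA_BAZY)_LOG.trn'
WITH BUFFERCOUNT = 16, MAXTRANSFERSIZE = 4194304, STATS = 10;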
SQL Server 2008 R2 Express has the following limitations:
Constrained to a single CPU (in SQL Server 2012 Express this was relaxed to "the lesser of one socket or four cores", so multi-threading is possible)
1GB RAM
10GB database size (per database)
So your backup is likely to be CPU bound and/or memory bound.
(Of course, this raises the obvious question: why are you using an Express edition on your server, and why not a later SQL Server version?)
Ref.: SQL Server 2008 R2 Express Database Size Limit Increased to 10GB:
What about CPU and memory limits? Are any other limits changed in SQL
Server 2008 R2 Express?
No, the database size limit is the only limit we updated in SQL Server 2008 R2 Express. SQL Server 2008 R2 Express is still limited
to 1 CPU and 1 GB of RAM.
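If you want to confirm which edition and version the instance is actually running before tuning anything, SERVERPROPERTY reports it; a minimal check:
-- Edition, version and service-pack level of the connected instance
SELECT SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level;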
Related
I'm using SQL Server 2014 on a server that has 76 GB of RAM. The "max server memory" setting in SQL Server is 32 GB, and the database size is about 600 GB.
SQL Server memory consumption hit 99% and affected server performance. Can anyone tell me the reason?
Thank you.
I found the problem!
It was in a procedure that moves data from SQL Server to MySQL every minute using a linked server.
Linked server activity is not included in "max server memory", which is why the memory filled up.
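One way to see this is to compare what the whole SQL Server process is using against the configured cap; a minimal sketch using standard DMVs (nothing environment-specific assumed):
-- Memory used by the sqlservr.exe process as a whole
SELECT physical_memory_in_use_kb / 1024 AS process_memory_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;

-- Configured cap; allocations made by OLE DB providers for linked servers
-- are not governed by this setting
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
If process_memory_mb sits well above the configured maximum, the difference is coming from allocations outside that cap, such as the linked server provider.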
When I attempt to restore the backup database (a 1.32 GB .bak file) into SQL Server 2008 I get the following message:
CREATE DATABASE or ALTER DATABASE failed because the cumulative size
of the resulting database will exceed the license limits of 4096 MB
per database
A backup is often significantly smaller than the space required by the live database; presumably the database the backup came from is over 4096 MB, so SQL Server cannot restore it. As per the comments, you'll need to get a new license (a higher edition) to be able to restore this database.
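To check how big the restored database would actually be before attempting the restore, RESTORE FILELISTONLY reports the size of every file contained in the backup (the path below is only an example):
-- Size is reported in bytes; the data files must fit within the edition's per-database limit
RESTORE FILELISTONLY
FROM DISK = N'C:\Backups\MyDatabase.bak';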
Lately I have been experiencing heavy RAM consumption on the server, and after checking which application was using the most, it turned out sqlservr.exe was taking 890,016 KB.
I want to know why SQL Server takes up so much of my server's RAM. My SQL workload performs simple operations on tables, stored procedures and functions, and no jobs run in the background.
I even tried restarting the server; after the restart, when the SQL Server service started, it took 90 MB, and once 8-9 users connected, usage went back up to 800-900 MB.
Server : Windows Server 2008R2 Standard
SQL : SQL Server 2008 R2
Open SSMS, connect to your instance, right-click the instance name -> Properties -> Memory, and check the Minimum and Maximum server memory settings.
By default the maximum is effectively unlimited, so SQL Server will take as much memory as it can get; reduce the max memory if needed.
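The same setting can also be changed from T-SQL instead of the SSMS dialog; a minimal sketch (the 4096 MB cap is only an example, pick a value that leaves enough memory for the OS and other applications):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;  -- example cap, adjust to your server
RECONFIGURE;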
My organization is looking into using MS SQL Web Edition 2008 R2, by SPLA, on our virtual server.
Currently we are using MS SQL Express Edition 2008 R2.
Looking at the feature comparison of the different MS SQL 2008 R2 editions, one of the things that Web Edition does not support, and which we need, is backup compression.
Currently, since we use Express, all of our DBs are under 10 GB.
We do a nightly full backup of each database to a storage drive.
My concern is that once the DB sizes grow after we start using Web Edition, the backup time will start to grow significantly.
What is the best approach to handle large DBs backups on MS SQL Web Edition?
Is there a way / workaround to do a compression?
One solution would be to not take a full backup every day: use differential backups most days (say, 6 out of 7 days a week) and take a full backup every so often (say, once a week); a sketch of this schedule follows the options below. The differentials should be significantly smaller than your full backups, assuming you don't have a lot of data churn going on in your database.
You could buy third-party backup software for SQL Server; it is an incremental cost on top of the SQL Server license.
Or you could try backing up to a compressed NTFS folder.
Or you could back up uncompressed and compress the file with 7-Zip, which uses state-of-the-art compression algorithms that produce much smaller files than SQL Server's built-in backup compression.
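A minimal sketch of the first option, a weekly full backup plus daily differentials (database name and paths are only examples):
-- Once a week: full backup
BACKUP DATABASE MyDatabase
TO DISK = N'E:\Backups\MyDatabase_FULL.bak'
WITH INIT, STATS = 10;

-- On the other days: differential backup, containing only the pages changed since the last full
BACKUP DATABASE MyDatabase
TO DISK = N'E:\Backups\MyDatabase_DIFF.bak'
WITH DIFFERENTIAL, INIT, STATS = 10;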
Quick question: is the 4GB limitation on EACH database or for the installed instance of SQL Server? As you know you can create more than one DB in an instance of SQL Server...
The 4GB limit is on each database.
Yes. It's per database, not total. If you want to be really clever you can store a lot more than 4 GB by spreading it over multiple databases (or even multiple instances).
Note that the size limit for each database has been raised to 10 GB in SQL Server Express 2008 R2.
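If you want to see how close each database is to that per-database limit (which counts data files only, not the log), the sizes can be read from sys.master_files; a minimal sketch:
-- size is in 8 KB pages; only ROWS (data) files count toward the Express limit
SELECT DB_NAME(database_id) AS database_name,
       SUM(size) * 8 / 1024 AS data_mb
FROM sys.master_files
WHERE type_desc = 'ROWS'
GROUP BY database_id
ORDER BY data_mb DESC;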
According to SQL Server 2005 Express Edition Features:
Scalability and Performance
Supports one CPU, but can be installed on any server
1 gigabyte (GB) addressable RAM
4 GB maximum database size