Database size MSSQL - sql-server

I have two databases: DB_1Large and DB_2Medium
DB_1Large.mdb is 50 GB
DB_2Medium.mdb is 16 GB
Problem:
It looks ridiculous, but the backups of these databases have the following sizes:
DB_1Large.bak - ~30 GB
DB_2Medium.bak - ~12 GB
And after compressing them with WinRAR or Management Studio, they have the following sizes:
DB_1Large.bak - ~2.5 GB
DB_2Medium.bak - ~5.5 GB
Why is that, and how can I make them smaller?
Here is the output of EXEC sp_spaceused for both databases:
database_name  database_size  unallocated space
DB_1Large      52349.38 MB    20197.74 MB

reserved       data           index_size     unused
30546184 KB    16273760 KB    13500336 KB    772088 KB

database_name  database_size  unallocated space
DB_2Medium     17144.19 MB    4672.13 MB

reserved       data           index_size     unused
12457024 KB    10608232 KB    1809120 KB     39672 KB

Here COMPRESSION is specified at backup time, so SQL Server compresses the backup as it is taken:
Use master
Go
BACKUP DATABASE [DatabaseName]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\DatabaseName_2017-08-07.bak'
WITH COPY_ONLY, NOFORMAT, NOINIT,
NAME = N'SqlClass-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 5
GO
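As a sanity check, you can compare the uncompressed and compressed sizes that SQL Server records in the backup history. A rough sketch, assuming at least one backup of each database has been taken and you can read msdb:
-- Compare native vs. compressed backup sizes recorded in msdb's backup history.
SELECT database_name,
       backup_finish_date,
       CAST(backup_size / 1048576.0 AS DECIMAL(12, 2))            AS backup_size_mb,
       CAST(compressed_backup_size / 1048576.0 AS DECIMAL(12, 2)) AS compressed_size_mb
FROM msdb.dbo.backupset
WHERE database_name IN (N'DB_1Large', N'DB_2Medium')
ORDER BY backup_finish_date DESC;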

Assuming you did not use the COMPRESSION option, your databases contain roughly 30 GB and 12 GB of data respectively. A backup writes out only the data, not the unallocated empty space.
As for the WinRAR compression ratios, you would have to look into its compression algorithm: judging by the numbers, your first backup is simply far more "compressible", most likely because its contents have more repeating byte patterns.
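If the goal is to make the data files themselves smaller, note that sp_spaceused shows roughly 20 GB of unallocated space inside DB_1Large that could be released back to the operating system. A minimal sketch, assuming the logical data file is also named DB_1Large (check sys.database_files first) and keeping in mind that shrinking fragments indexes:
USE DB_1Large;
GO
-- List logical file names and current sizes (size is reported in 8 KB pages).
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;
GO
-- N'DB_1Large' below is the assumed logical file name; replace it with the
-- actual name returned by the query above. Target size is in MB; adjust as needed.
-- Rebuild or reorganize indexes afterwards, since shrinking fragments them.
DBCC SHRINKFILE (N'DB_1Large', 32000);
GO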

Related

Disk is full and cannot start MongoDB, How to drop databases or tables

root@mongo_node_1:~# df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
udev             42G     0    42G    0%  /dev
tmpfs           8.3G  1.3M   8.3G    1%  /run
/dev/sda2       2.9T  2.9T      0  100%  /
tmpfs            42G     0    42G    0%  /dev/shm
tmpfs           5.0M     0   5.0M    0%  /run/lock
tmpfs            42G     0    42G    0%  /sys/fs/cgroup
/dev/loop0       87M   87M      0  100%  /snap/core/4917
/dev/loop1       90M   90M      0  100%  /snap/core/8268
tmpfs           8.3G     0   8.3G    0%  /run/user/0
root@mongo_node_1:~#
I have deleted the 20 GB mongod log file, but the disk is still full, so I can only delete some databases or tables to free up space.
However, mongod cannot be started now. Can I delete a database or table without starting mongod?
By the way, there are three database nodes. Only the shard1 server disk is full.
If your shard has more than one replica set member, you can delete the entire contents of the data folder and the member will initial-sync its content from the other members. If it is the only member and it runs under the default WiredTiger storage engine, it is best not to delete files from the data folder, since you could easily corrupt the content. In that case it is better to shut down the member, extend the partition offline, and start the member again.

ORA-00907: missing right parenthesis in CREATE DATABASE

I keep getting the same error while creating a database. It seems like something is wrong with the syntax. Here is how I create it:
CREATE DATABASE darkarmy
USER SYS IDENTIFIED BY admin
USER SYSTEM IDENTIFIED BY admin
LOGFILE GROUP 1 ('/u01/zdw07/logs/redo01a.log') SIZE 10M,
GROUP 2 ('/u01/zdw07/logs/redo02a.log') SIZE 10M,
GROUP 3 ('/u01/zdw07/logs/redo03a.log') SIZE 10M,
GROUP 4 ('/u01/zdw07/logs/redo04a.log’) SIZE 10M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 10
MAXDATAFILES 50
CHARACTER SET UTF8
NATIONAL CHARACTER SET UTF8
EXTENT MANAGEMENT LOCAL
DATAFILE '/u01/zdw07/darkarmy/node03/upike69.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
SYSAUX DATAFILE '/u01/zdw07/darkarmy/node02/jap41.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED,
'/u01/zdw07/darkarmy/node02/nud37.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TABLESPACE users
DATAFILE '/u01/zdw07/darkarmy/node03/arulice693.dbf'
SIZE 50M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/u01/zdw07/darkarmy/temp01.dbf' SIZE 100M REUSE
UNDO TABLESPACE undotbs1
DATAFILE '/u01/zdw07/darkarmy/undotbs1.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
The error message:
'/u01/zdw07/darkarmy/node03/upike69.dbf' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
*
ERROR at line 16:
ORA-00907: missing right parenthesis
I am new to Oracle, would be grateful for any help!
I think you have a problem here:
GROUP 4 ('/u01/zdw07/logs/redo04a.log’) SIZE 10M
The closing quote is ’ when it should be '
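With a straight quote, that line would read:
GROUP 4 ('/u01/zdw07/logs/redo04a.log') SIZE 10M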

XFS inodes suddenly exhausted - but not because of IUsed going up

I got an alert that IUse% on my XFS filesystem had suddenly jumped from 3% to 96% used.
An hour or so later, it went back to 3%.
During the problem:
# df -i /data
Filesystem                       Inodes    IUsed     IFree    IUse%  Mounted on
/dev/mapper/VolGroup01-LogVol01  57082000  54388657  2693343  96%    /data
After resolution:
# df -i /data
Filesystem                       Inodes      IUsed     IFree       IUse%  Mounted on
/dev/mapper/VolGroup01-LogVol01  2621197920  54375585  2566822335  3%     /data
Note that IUsed (column 3) stays almost exactly the same -- in both cases there are ~54 million inodes used.
But during the problem, the total number of inodes (column 2) drops drastically, from roughly 2.6 billion down to 57 million.
What could cause this?

UFS - how a 0 bytes file broke filesystem header?

For those reaching here: unfortunately I could not recover the data. After various attempts and after reproducing the problem, it became too costly to keep trying, so we simply used an earlier backup to recreate the information we needed.
A human error broke a 150 GB UFS filesystem (Solaris).
While trying to back up the filesystem (c0t0d0s3), ufsdump(1M) was not used correctly.
I will explain the background that led to this ...
The admin used:
root@ats-br000432 # ufsdump 0f /dev/dsk/c0t0d0s3 > output_1
Usage: ufsdump [0123456789fustdWwnNDCcbavloS [argument]] filesystem
This is incorrect usage, so all it did was print the usage message and create a 0-byte file called output_1:
# ls -la output_1
-rw-r--r-- 1 root root 0 abr 12 14:12 output_1
Then, the syntax used was:
# ufsdump 0f /dev/rdsk/c0t0d0s3 output_1
This used /dev/rdsk/c0t0d0s3 - the partition slice itself - as the dump output file, with the 0-byte output_1 as the filesystem argument.
Now, interestingly, because output_1 was a 0-byte file, we thought this would cause no harm to the filesystem, but it did.
When trying to ls in the mount point, the partition reported an I/O error. After unmounting and mounting again, the filesystem showed no contents, yet the disk space was still reported as used, just as before.
I assume, at some point, the filesystem 'header' was affected, right? Or was it the slice information?
A small fsck try brings up this:
** /dev/rdsk/c0t0d0s3
** Last Mounted on /database
** Phase 1 - Check Blocks and Sizes
INCORRECT DISK BLOCK COUNT I=11 (400 should be 208)
CORRECT?
Disk block count / I=11
This seems to indicate that the command broke the filesystem's information about its own contents, right?
When we ran fsck -y -F ufs /dev/dsk.. various files were recovered, but not the dbf files we are after (which are GB-sized).
What can be done now? Should I try every alternate superblock reported by newfs -N?
EDIT: new information regarding partition
newfs output showing superblock information
# newfs -N /dev/rdsk/c0t0d0s3
Warning: 2826 sector(s) in last cylinder unallocated
/dev/rdsk/c0t0d0s3: 265104630 sectors in 43149 cylinders of 48 tracks, 128 sectors
129445,6MB in 2697 cyl groups (16 c/g, 48,00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
.....................................................
super-block backups for last 10 cylinder groups at:
264150944, 264241184, 264339616, 264438048, 264536480, 264634912, 264733344,
264831776, 264930208, 265028640

How to use Perl DBI for SQL Server backup

I am trying to back up a SQL Server database via Perl DBI. Calling "backup database" via do() runs but usually does not produce a backup. Calling do() does create a backup when ODBC tracing is enabled. Calling prepare() and execute() fails.
I am using ActiveState Perl on Windows 7 Professional and SQL Server 2008 R2.
Here is a link to download source code and various logs
http://www.fileswap.com/dl/4VnYbCdk6R/ToZip.zip.html
(Click on slow download)
Here is the summary of logs
BothTraces made 3 backups but program aborted
-rwx------+ 1 SYSTEM SYSTEM 160256 Jan 16 09:39 perlEasy.bak
-rwx------+ 1 SYSTEM SYSTEM 160256 Jan 16 09:39 perlHard.bak
-rwx------+ 1 SYSTEM SYSTEM 160256 Jan 16 09:38 queryOS.bak
NoTracing made 1 backup, program aborted
-rwx------+ 1 SYSTEM SYSTEM 160256 Jan 16 10:15 queryOS.bak
DbiTrace made 1 backup, program aborted
-rwx------+ 1 SYSTEM SYSTEM 160256 Jan 16 10:19 queryOS.bak
OdbcTrace made 3 backups but program aborted
-rwx------+ 1 SYSTEM SYSTEM 159744 Jan 16 10:21 perlEasy.bak
-rwx------+ 1 SYSTEM SYSTEM 160256 Jan 16 10:21 perlHard.bak
-rwx------+ 1 SYSTEM SYSTEM 160256 Jan 16 10:21 queryOS.bak
Here's my program:
#!perl -w
#try to use DBI for SQL Server backup
#connect to database server
use v5.14; #enable modern Perl
use DBI; #database interface
my $dbHandle = DBI->connect("dbi:ODBC:Driver={SQL Server};Server=DavidZ") or die; #dbi prints a detailed error message
$dbHandle->{RaiseError} = 1; #enable failure on DBI problems; obviates the need for "or die" with every DBI call
$dbHandle->{PrintError} = 0; #don't duplicate error messages
#enable debugging
$dbHandle->trace(1);
$dbHandle->{odbc_trace} = 1; #not helpful
$dbHandle->{odbc_trace_file} = 'C:\David\dump\tracer.file'; #not helpful
#run a SQL command to verify connection, write a note to ERRORLOG
$dbHandle->do ('use master');
$dbHandle->do ("raiserror ('New run of backup.pl', 0, 0) with log");
say 'Verified database connection';
#backup commands
my $perlEasy = "backup database dz to disk='C:\\David\\dump\\perlEasy.bak'";
my $perlHard = "backup database dz to disk='C:\\David\\dump\\perlHard.bak'";
my $queryOS = "backup database dz to disk='C:\\David\\dump\\queryOS.bak'";
#make a backup via sqlcmd. this works
my $sysCmd = "sqlcmd -Q \"$queryOS\" ";
system ($sysCmd) == 0
or die "The following system command failed: $sysCmd \n";
say 'Created backup via sqlcmd';
#try to make a backup via DBI
$dbHandle->do ($perlEasy); #runs silently but does not produce a backup file
say 'Created backup the easy way';
#more complicated DBI method
my $stHandle = $dbHandle->prepare($perlHard);
$stHandle->execute(); #statement starts a backup then fails, no further code is executed
do
{
#print dbi results
say "DBI reports $DBI::errstr";
while (my @row = $stHandle->fetchrow_array()) #recommended by someone, but makes no sense for a backup
{ say "Returned values: @row" } #recommended by someone, but makes no sense for a backup
} while ($stHandle->{odbc_more_results});
say 'Created backup the hard way';
#program completion
say 'Program completed successfully';
exit 0;
There is nothing wrong with the Perl code you show. However, the ODBC trace file shows that DBD::ODBC made these calls just before the error:
SQLPrepare backup database dz to disk='C:\David\dump\perlHard.bak'
SQLExecute returns SQL_SUCCESS_WITH_INFO and
Processed 208 pages for database 'dz', file 'dz_test' on file 1. (4035)
then a few calls for various handles to SQLErrorW
SQLRowCount returns ok and -1 for row count
SQLNumResultCols returns SQL_ERROR and "Invalid cursor state"
I cannot for the life of me see how this is an invalid cursor state (look up the valid ODBC state transitions yourself), so I'd have to say this looks like a bug in the SQL Server ODBC driver you are using. You could try getting a newer one, or use the SQL Server Native Client driver instead (you probably have both already).
You can ignore the errors in your SQL Server log, as they are accurate: error 1235 is ERROR_REQUEST_ABORTED, and the request was indeed aborted.
