Multiple data files for a single database in different drives

We receive thousands of records daily in our database (let's call it 'abc'), and the MDF file has now grown to 800 GB. The drive holding the primary MDF file is full because of it, and we cannot add storage to that drive due to cost constraints.
Can we have multiple data files for one database on different drives? Would there be any issue in doing this?
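Yes: secondary data files (.ndf) can live on any volume, and SQL Server's proportional fill will direct new allocations to the file with the most free space, relieving the full drive without moving existing data. A minimal sketch, assuming a drive E: with free space (the logical name, path, and sizes are illustrative):

    -- Add a secondary data file on another drive; it joins the PRIMARY
    -- filegroup, and new allocations spread across both files.
    ALTER DATABASE abc
    ADD FILE (
        NAME = abc_data2,                       -- logical name (assumed)
        FILENAME = 'E:\SQLData\abc_data2.ndf',  -- path on the new drive (assumed)
        SIZE = 100GB,
        FILEGROWTH = 10GB
    );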

Related

Backup specific tables from production into a .bak file (not BCP) in SQL Server

I need to back up a couple of tables from my prod database.
Each table has 250 million rows.
The complete backup is almost 1 TB.
I don't want a full backup because of the space (it needs double the space for backup & restore).
I can't use BCP due to the volume and the heavy inserts involved.
SQL Server backups (.bak) are always all or nothing.
The only "selection" you can get is if you have multiple filegroups - then you can back up just one filegroup and ignore all the others.
But you cannot selectively back up just a handful of tables into a .bak file. There's simply no feature for that.
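For illustration, a filegroup backup (assuming a filegroup named FG_Archive already exists) looks like this:

    -- Back up a single filegroup instead of the whole database
    BACKUP DATABASE Prod
        FILEGROUP = 'FG_Archive'
        TO DISK = 'D:\Backups\Prod_FG_Archive.bak';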
Alternatively, create an auxiliary database, select from your 250-million-row tables into corresponding columnstore tables (in million-row batches), back up the aux database, and you have your data "suitcase".
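A sketch of that "suitcase" approach, with all names (AuxDb, Prod, dbo.BigTable, Id, Payload) as assumptions; the inline clustered columnstore syntax requires SQL Server 2014 or later:

    -- 1. Auxiliary database that will hold only the copied tables
    CREATE DATABASE AuxDb;
    GO
    USE AuxDb;
    GO
    -- 2. Columnstore copy of the source table (adjust columns to your
    --    schema); columnstore compression keeps the backup small
    CREATE TABLE dbo.BigTable_Copy (
        Id      BIGINT        NOT NULL,
        Payload NVARCHAR(200) NULL,
        INDEX cci CLUSTERED COLUMNSTORE
    );
    GO
    -- 3. Copy in ~1M-row batches keyed on Id so each transaction stays small
    DECLARE @MinId BIGINT = 0, @MaxId BIGINT, @Batch BIGINT = 1000000;
    SELECT @MaxId = MAX(Id) FROM Prod.dbo.BigTable;
    WHILE @MinId <= @MaxId
    BEGIN
        INSERT INTO dbo.BigTable_Copy (Id, Payload)
        SELECT Id, Payload
        FROM Prod.dbo.BigTable
        WHERE Id > @MinId AND Id <= @MinId + @Batch;
        SET @MinId += @Batch;
    END;
    GO
    -- 4. The "suitcase": a compact backup containing just the copied data
    BACKUP DATABASE AuxDb TO DISK = 'D:\Backups\AuxDb.bak' WITH COMPRESSION;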

Merge replication with Filestream

I have a scenario where I am using a FILESTREAM-enabled table to upload and download files.
I need the files available in two instances, database A and database B. Files can be uploaded to both A and B and must be synced to the other party. There is no uniqueness restriction; identical files might be uploaded to both databases. Note that the table to be replicated is used only for the files and nothing else.
How reliable is merge replication in this case? Some of the files can be up to 2 GB in size. Is the replication revertible, i.e. if it fails midway while streaming the files to the other database, will all the changes caused by the replication be rolled back?
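Merge replication can publish FILESTREAM columns, and the article-level @stream_blob_columns option on sp_addmergearticle makes the agent transfer the blob in chunks rather than loading a whole 2 GB file into memory. A hypothetical article definition (publication and table names are assumptions):

    -- Publish the FILESTREAM table as a merge article with blob streaming
    EXEC sp_addmergearticle
        @publication         = N'FilesPub',   -- assumed publication name
        @article             = N'Documents',  -- assumed table name
        @source_owner        = N'dbo',
        @source_object       = N'Documents',
        @stream_blob_columns = N'true';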

MDF file is keeping deleted data

How do I get rid of deleted data (names, addresses, etc.) that no longer exists in the database but that the .MDF file still contains?
In my country there is a law obligating me to get rid of this data, but I can't.
I've switched the database (SQL Server 2005) to the simple recovery model, performed a full backup, and shrunk the database and files (data file and log file).
The data still persists in the data file (MDF).
Is the table a heap?
Heaps do not physically remove the data when you delete rows. To reclaim the space, create a clustered index on the table.
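A minimal sketch, with the table and column names as assumptions:

    -- Building a clustered index rewrites every remaining row into new
    -- pages and deallocates the old heap pages that still hold the
    -- deleted values.
    CREATE CLUSTERED INDEX CIX_Customers_Id ON dbo.Customers (Id);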

Database to manage million log files

I wish to have a large number (e.g. a million) of log files on a system, but the OS has a limit on open files, and it is not efficient to create a million files in a single folder.
Is there a ready-made solution, framework, or database that will create log files and append data to them in an efficient manner?
I can imagine various techniques to optimize the management of a large number of log files, but there might be something that does this out of the box.
E.g. I wish each log file to be re-created every day or when it reaches 50 MB. Old log files must be stored, e.g. uploaded to Amazon S3.
I can imagine a log database writing all logs to a single file, with a later process appending the records to the millions of individual files.
Maybe there is a special file system that is good for such a task. I can't find anything, but I am sure a solution might exist.
PS: I wish to run logging on a single server. I say 1 million because it is more than the default limit on open files. 1 million files of 1 MB each is 1 TB, which could be stored on a regular hard drive.
I am looking for an existing solution before I write my own. I am sure there might be a class of logging servers for this; I just do not know how to search for them.
I would start by thinking of Cassandra or Hadoop as the store for the log data, and eventually, if you want the data in the form of files, write a procedure that selects from one of these databases and places the records in formatted files.
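For example, a hypothetical Cassandra (CQL) schema that models each logical "log file" as a partition, rolled over daily so old partitions can be exported to S3 and dropped (the keyspace and all names are assumptions):

    -- One partition per (source, day): gives the daily roll-over asked
    -- for above without ever holding a million OS file handles open
    CREATE TABLE logs.entries (
        source text,        -- which of the million logical "log files"
        day    date,        -- daily partition, exported to S3 when old
        ts     timestamp,   -- time of the individual log record
        line   text,        -- the log line itself
        PRIMARY KEY ((source, day), ts)
    );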

MDF and LDF file sizes

I was wondering if there is any recommended maximum size for MDF and/or LDF files for a SQL Server instance.
For example, if I want to create a 400 GB database, is there a rule to help me decide how many data files I should create, or should I just go ahead and create a single gigantic 400 GB MDF file?
And is this going to somehow affect database performance?
What you do will depend on your disk system. You need to figure out what types of transactions your application will perform and configure your disks to handle them. The I/O system is the bottleneck in most systems, so this will definitely affect performance. Isolate sequential I/O and distribute random I/O.
Some guidelines from a SQL 2000 tuning book:
Isolate the transaction log on its own RAID 1 or RAID 10 drive.
Configure enough drives in your RAID array, or split the database into filegroups on separate disks, so you can keep each volume at fewer than 125 I/Os per second (that number may be outdated).
Configure data file volumes as RAID 5 if the transactions are expected to be mostly reads.
Configure data volumes as RAID 10 if more than 10% writes are expected.
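To make that concrete, a hypothetical layout for the 400 GB database above, with the data split across two volumes and the log isolated on its own drive (all names, paths, and sizes are assumptions):

    CREATE DATABASE Sales
    ON PRIMARY
        (NAME = Sales_data1, FILENAME = 'D:\SQLData\Sales_data1.mdf',
         SIZE = 200GB, FILEGROWTH = 10GB),
        (NAME = Sales_data2, FILENAME = 'E:\SQLData\Sales_data2.ndf',
         SIZE = 200GB, FILEGROWTH = 10GB)
    LOG ON
        (NAME = Sales_log,   FILENAME = 'L:\SQLLog\Sales_log.ldf',
         SIZE = 20GB, FILEGROWTH = 5GB);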
