Wanted to know if it's OK to add an additional data file (.ndf file) to an existing database that is configured in an Always On availability group.
Is it recommended to do this during downtime?
The databases that need an additional file are very big (2 TB to 5 TB). The new file will be added to a different disk from the rest of the data files in that database. Please advise whether I can add the data file through the GUI on the primary replica in the production environment without downtime. The secondary server has the same disks and folder paths. Are there any issues I need to be careful about? Thank you
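For reference, the T-SQL equivalent of what I plan to do through the GUI would be something like this (database name, logical file name, and path are hypothetical; the path exists on both replicas in my case):

ALTER DATABASE [MyAGDatabase]
ADD FILE
(
    NAME = N'MyAGDatabase_Data2',
    FILENAME = N'F:\SQLData\MyAGDatabase_Data2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 5GB
)
TO FILEGROUP [PRIMARY];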
I made a custom application that has been running for several years and is full of company data.
Now I need to replicate the application for another customer, so I set up a new server, cloned the databases, and emptied all the tables.
Then I shrank the database and its files.
On the SQL Server side the databases look empty, but if I run a grep search on the database .mdf and .log files I can still find occurrences of the previous company's name, including in the system databases.
How do I really clean a SQL Server database?
Don't use backup/restore to clone a database for distribution to different clients. These commands copy data at the physical page/extent level, which may contain artifacts of deleted data, dropped objects, etc.
The best practice for this need is to create a new database with schema and system data from scratch using T-SQL scripts (ideally source controlled). If you don't already have these scripts, T-SQL scripts for schema and data can be generated from an existing database using the SMO API via .NET code or PowerShell. Here's the first answer I found with a search that uses the Microsoft.SqlServer.Management.SMO.Scripter class. Note you can script data too (INSERT statements) by specifying the ScriptData scripting option for the desired tables.
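As a toy illustration (all names here are made up), such a source-controlled script is just ordinary DDL plus INSERTs for the system/reference data, for example:

CREATE TABLE dbo.OrderStatus
(
    OrderStatusId tinyint      NOT NULL CONSTRAINT PK_OrderStatus PRIMARY KEY,
    Description   nvarchar(50) NOT NULL
);

-- Reference/system data shipped with every new client database
INSERT INTO dbo.OrderStatus (OrderStatusId, Description)
VALUES (1, N'Open'), (2, N'Shipped'), (3, N'Cancelled');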
I would like to know the steps to restore data dumped from an Oracle database into a SQL Server database.
Our purpose is to get data from an external Oracle database outside our organization. Due to security concerns, the team that manages the data source refused to let us transfer the data through an ODBC linked server. They dumped the selected tables we need so that we can restore the data within our organization. Each table's dump includes a .sql file to create the table and constraints, a .ctl file, and one or more .ldr files.
An extra complication: one of the tables contains a BLOB column that stores a lot of binary files, such as PDFs. This column accounts for most of the size of the dumped files; otherwise I could ask them to send us the data in Excel directly.
Can someone give me a suggestion about what route we should take?
Either get them to export the data in an open format, or load it into an Oracle instance you have full control over. The .ctl and .ldr files suggest they used the old SQL*Loader utility.
I'm building a process where, after a match is made with a specific document through a query in SQL Server, a file needs to be sent to a new file share on a Windows server.
Is there a way to hold files/file names (links?) in a table, and then send them/use a command to move them from a root folder that holds all the documents to a specific folder?
From reviewing some earlier posts about MySQL, I understood that storing the actual files in the database is not recommended because of the resource usage involved in pulling them. I'd be happy if you could help me with a way to store such names/links in SQL Server tables for later use.
Thank you!
Microsoft SQL Server has FileTables, which I believe fit your case.
FileTables store files in the file system. The files can be inserted/deleted/updated from the file system and/or from SQL Server. You can attach SAN/NAS storage in Windows and redirect your files there.
FileTables also support clustering.
Please note that the transactional use case is limited.
Edit
To enable FILESTREAM for an existing database:
ALTER DATABASE database_name
SET FILESTREAM ( NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'directory_name' )
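Before a FileTable can be created, the database also needs a FILESTREAM filegroup and file (this assumes FILESTREAM is already enabled at the instance level); a rough sketch with hypothetical names and paths:

ALTER DATABASE database_name
ADD FILEGROUP FileStreamFG CONTAINS FILESTREAM;

ALTER DATABASE database_name
ADD FILE (NAME = N'FileStreamData', FILENAME = N'D:\FileStreamData')
TO FILEGROUP FileStreamFG;

-- A FileTable then exposes its rows as files under the FILESTREAM directory
CREATE TABLE dbo.Documents AS FILETABLE
WITH (FILETABLE_DIRECTORY = N'Documents');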
We have a large (>40 GB) FILESTREAM-enabled database in production. I would like to automatically make a backup of this database and restore it to staging, for testing deployments. The nature of our environment is such that the FILESTREAM data is more than 90% of the data and I don't need it in staging.
Is there a way that I can make a backup of the db without the filestream data, as this would drastically reduce my staging disk and network requirements, while still enabling me to test a (somewhat) representative sample of prod?
I am assuming you have a fairly recent version of SQL Server. Since this is production, I am assuming you are using the full recovery model.
You can't just exclude individual tables from a backup; backup and restore do not work like that. The only possibility I can think of is to back up just the filegroups that do not contain the FILESTREAM data. I am not 100% sure you will be able to restore it, though, since I have never tried it. Spend some time researching partial backups and restoring a filegroup, and give it a try.
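To give an idea of the direction, a rough and untested sketch (database, filegroup, and path names are hypothetical):

-- Back up only the filegroup(s) that do not contain the FILESTREAM data
BACKUP DATABASE ProdDb
    FILEGROUP = N'PRIMARY'
    TO DISK = N'D:\Backups\ProdDb_primary.bak'
    WITH COPY_ONLY, COMPRESSION;

-- On staging, this would then be the start of a piecemeal restore sequence
-- (RESTORE DATABASE ... FILEGROUP = N'PRIMARY' ... WITH PARTIAL), which is the
-- part you need to research and test before relying on it.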
You can use the Generate Scripts interface and do one of the following:
copy all SQL objects and the data (without the filestream tables) and recreate the database
copy all SQL objects without the data; create the objects in a new database on the current SQL instance; then copy the data that you need directly from the first database.
The first is the lazy option and probably will not work well with a big database. The second will work for sure, but you need to sync the data on your own (a sketch of that follows below).
In both cases, open the Generate Scripts interface (in SSMS, right-click the database, then Tasks > Generate Scripts).
Then choose all objects and all tables except the big ones.
From the advanced scripting options you can control whether the data is extracted (skipped or included).
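For the second option, copying the data you need from the original database is a plain cross-database INSERT ... SELECT; a hypothetical example with made-up names:

INSERT INTO NewDb.dbo.Customers (CustomerId, CustomerName)
SELECT CustomerId, CustomerName
FROM ProdDb.dbo.Customers;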
I guess it will be best to script all the objects without the data, then create a model database. You can even add some sample data to your model database. When you change the production database (create a new object, delete an object, etc.), apply the changes to your model database, too. Having such a model database means you have a copy of your production database with all supported functionality, and you can restore this model database on every test SQL instance you want.
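Restoring that model database onto a test instance is then an ordinary backup/restore; a sketch with hypothetical database names, paths, and logical file names:

BACKUP DATABASE ModelDb
    TO DISK = N'D:\Backups\ModelDb.bak'
    WITH COPY_ONLY, INIT;

RESTORE DATABASE TestDb
    FROM DISK = N'D:\Backups\ModelDb.bak'
    WITH MOVE N'ModelDb' TO N'E:\TestData\TestDb.mdf',
         MOVE N'ModelDb_log' TO N'E:\TestData\TestDb_log.ldf',
         RECOVERY;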
I have a database in SQL Server 2008 (not R2).
A third party has the job of replacing the database regularly by restoring the data in the live environment from a .bak file made in the development environment. This destroys any user-generated data in that database. I am restricted in the live environment and cannot have two databases there.
One solution I am thinking about is to write a stored procedure that could somehow save the user-generated data to some kind of local file; then, once the development .bak is restored, a second stored procedure could write this data back from the local file.
I'm familiar with using Generate Scripts to produce a .sql file, so maybe something similar to that, but it needs to be generated from a SQL query that returns only the user-generated data (these are specific rows of certain tables joined together; not the best design, but it's what I have to work with).
Is it possible to generate a SQL script from a SQL query? Or is there some other kind of local file storage I could use? Something like a CSV file would be OK, but I'm not aware of an easy way to automate restoring it. It would need to be restored with some very specific SQL queries.
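For example, would something along these lines be a reasonable approach? A rough sketch with made-up table and column names, building INSERT statements from a query whose output could be saved to a .sql file (e.g. with sqlcmd or bcp) before the restore and replayed afterwards:

SELECT 'INSERT INTO dbo.UserNote (UserNoteId, UserId, NoteText) VALUES ('
     + CAST(n.UserNoteId AS varchar(20)) + ', '
     + CAST(n.UserId AS varchar(20)) + ', N'''
     + REPLACE(n.NoteText, '''', '''''') + ''');'
FROM dbo.UserNote AS n
JOIN dbo.[User] AS u ON u.UserId = n.UserId
WHERE u.IsSystemUser = 0;   -- keep only the user-generated rows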