I am working on my first WinForms application with a Firebird database shared over a network. Now I wonder: how should I handle database backup and restore?
Up till now, my applications used embedded databases (SQLite), so I could be sure that only my application accessed the database. The application itself was responsible for backups and restores: I could simply copy the database file and that was it.
The backup was made:
Automatically at each application start
Automatically every week
Manually by the user
When the user wanted to restore from a backup, he could do so at any time and choose from any type of backup, all directly from my application.
For the new application, I've moved from SQLite to Firebird. I chose Firebird because the application will run with an embedded database by default, but it can also be used with a classic server. With Firebird, I can use both embedded and server modes with the same database file.
The problem is that when the database runs on a server, there can be many users working with it at the same time, so I don't know how to handle backup and restore. Should I omit the backup/restore functionality from my app and let an admin make the backups on the server? Or should my app include backup and restore?
A shared database is simply new territory for me, so I don't know the best practices. In any case, the database will be pretty small and only a few users will be working at the same time.
Thanks, Petr
Don't just copy the database file while the server is running; you will end up with a corrupt copy.
Firebird is a relational database server; gbak is the official tool for running hot backups.
Check this out: http://firebirdfaq.org/cat5/
On a shared server, you have several options for making backups:
Use a file-backup tool that supports Microsoft Volume Shadow Copy. This takes a snapshot of your database, and Firebird was designed to "survive" such backups. Restoring such a backup is like recovering from a power failure; on the other hand, if you need to hand the procedure to an IT department and have them monitor it, this is a serious option.
Use gbak.exe to make a copy of the database into a backup file while the database is in use, then make a backup of that file. This is the recommended method, but for it to work properly you need to inspect the exit code of gbak.exe to check that no error happened. Not all IT departments are able to do that.
However, on a shared server you must always be paranoid: most backups in big organizations cannot be restored, and usually the cause is human error. Therefore, I recommend a third option, which is basically a combination of the first two:
Use gbak.exe to make a copy of the database into a backup file. If possible, monitor the exit code of gbak (see the C# sketch after this list).
Use a Microsoft Volume Shadow Copy enabled backup program to back up both the primary database and the backup file.
This should give you a good backup file to restore, and if gbak failed and no one noticed, you can fall back to the raw snapshot of the running database file. Several people must make several mistakes for this approach to fail.
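To make the exit-code surveillance concrete, here is a minimal C# sketch. It assumes gbak.exe is on the PATH; the paths and credentials are hypothetical placeholders, so adjust them to your installation:
using System;
using System.Diagnostics;

class GbakBackup
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            // -b performs a backup; host:path and credentials are placeholders
            FileName = "gbak.exe",
            Arguments = @"-b -user SYSDBA -password masterkey " +
                        @"localhost:C:\data\mydb.fdb C:\backups\mydb.fbk",
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
            // A non-zero exit code means the backup failed and must not be trusted
            if (process.ExitCode != 0)
                Console.Error.WriteLine("gbak failed with exit code " + process.ExitCode);
        }
    }
}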
If you are using a shared database, then you should probably take the backup/restore process out of your application; otherwise one user could corrupt or wipe out the work of another user.
You can use nbackup in C# as follows:
using System;
using System.Diagnostics;

const String Usuario = "SYSDBA";
const String Contrasena = "masterkey";
// nivelRespaldo is the nbackup increment level (0 = full backup)
String argumentos = String.Format(@"/C nbackup -U {0} -P {1} -B {2} BD.FDB",
    Usuario, Contrasena, (Int32)nivelRespaldo);
Process process = new Process();
process.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
process.StartInfo.FileName = "cmd.exe";
process.StartInfo.Arguments = argumentos;
process.Start();
process.WaitForExit();   // let the backup finish before closing the handle
process.Close();
In case you want to lock the database while making the backup:
String argumentos = String.Format(@"/C nbackup -U {0} -P {1} -L {2}",
    NombreArchivoRespaldo.Usuario, NombreArchivoRespaldo.Contrasena, Glo.NombreBaseDatos);
Don't forget to unlock it afterwards:
String argumentos = String.Format(@"/C nbackup -U {0} -P {1} -N {2}",
    NombreArchivoRespaldo.Usuario, NombreArchivoRespaldo.Contrasena, Glo.NombreBaseDatos);
If you want to close all the connections, try:
FbConnection.ClearAllPools();
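Putting the snippets together, here is a hedged end-to-end sketch of the lock, copy, unlock sequence; the file paths are hypothetical. While the database is locked with -L, the main database file is safe to copy:
using System;
using System.Diagnostics;
using System.IO;
using FirebirdSql.Data.FirebirdClient;

class NbackupLockCopy
{
    const String Usuario = "SYSDBA";
    const String Contrasena = "masterkey";

    static void Main()
    {
        FbConnection.ClearAllPools();          // close pooled connections first
        RunNbackup(@"-L C:\data\BD.FDB");      // lock the database
        try
        {
            File.Copy(@"C:\data\BD.FDB", @"C:\backups\BD.FDB", true);
        }
        finally
        {
            RunNbackup(@"-N C:\data\BD.FDB");  // always unlock, even on failure
        }
    }

    static void RunNbackup(String accion)
    {
        var process = Process.Start(new ProcessStartInfo
        {
            FileName = "nbackup",
            Arguments = String.Format("-U {0} -P {1} {2}", Usuario, Contrasena, accion),
            UseShellExecute = false,
            CreateNoWindow = true
        });
        process.WaitForExit();
        if (process.ExitCode != 0)
            throw new IOException("nbackup failed: " + accion);
    }
}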
Related
I'm onboarding at a company that bootstraps its database from a small SQL Server backup file. I prefer to avoid polluting my main Windows installation with various middleware, so I'd like to dockerize as much of this as possible.
That said, I'm not very familiar with SQL Server administration, so I'm somewhat at a loss as to how to accomplish the details, and whether my thinking on this is at all correct.
I'm considering two basic approaches to this:
Make initializing the database (i.e. restoring the backup) part of the build for the database image. That is, I'd add a Dockerfile with FROM microsoft/mssql-server-windows-express to the project, restore the backup file during the build, and end up with a container image with the database ready.
The upside here is that it kind of makes sense for this to be part of the image build - if the initial backup file is updated, I only need to use docker-compose up --build to get a correct state.
The drawback is that the data files should probably live in a Docker volume, and those don't really exist at container build time. Having to remember to clear the volume before an image rebuild to actually recreate the schema seems like it would obviate the desired advantage.
Make a one-off tool (sketched below) to restore the database into an MDF+LDF pair stored in a Docker volume, then detach them from the server. Then use the attach_dbs environment variable to attach them in the SQL Server service that'll be running long-term.
This approach makes it obvious that the lifetime of the database files is independent from the lifetime of any given SQL Server instance.
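For the second approach, here is a minimal sketch of the one-off restore-and-detach tool, assuming an instance reachable at localhost; the database name, logical file names, and paths are hypothetical (use RESTORE FILELISTONLY to find the real logical names in your .bak):
using System.Data.SqlClient;

class RestoreTool
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=localhost;Integrated Security=true"))
        {
            conn.Open();
            // Restore into files that live on the Docker volume
            Run(conn, @"RESTORE DATABASE [Db] FROM DISK = 'C:\Data\db.bak'
                        WITH MOVE 'Db_Data' TO 'C:\Data\db.mdf',
                             MOVE 'Db_Log'  TO 'C:\Data\db.ldf'");
            // Detach so the files outlive this instance and can be
            // re-attached later via the attach_dbs environment variable
            Run(conn, "EXEC sp_detach_db 'Db'");
        }
    }

    static void Run(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn)) cmd.ExecuteNonQuery();
    }
}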
My questions then are:
Which of those approaches is a better idea, if they're even both at all workable?
Is there a better approach to accomplish going from .bak -> working database in container?
How do I restore, using the command line, a SQL Server database backup to a specific path, i.e. "C:\Data" within the container? (That will be mapped to a host directory using a volume.)
It's not clear exactly when you need the state of the container database to be reset; both your options sound like they'd work.
In the event that changes to the backup require the database to be rebuilt, this can be done quite efficiently in a two-stage Windows container build:
# Stage 1: restore the backup and shut the server down cleanly
FROM microsoft/mssql-server-windows-developer AS db_restore
COPY db.bak \.
RUN Invoke-Sqlcmd -Query \"restore database [temp] from disk = 'c:\\db.bak' \
with move 'Db_Data' to 'c:\\db.mdf', \
move 'Db_Log' to 'c:\\db.ldf'\"
RUN Invoke-Sqlcmd -Query \"shutdown with nowait\"

# Stage 2: ship only the restored data files and attach them
FROM microsoft/mssql-server-windows-developer
WORKDIR \data
COPY --from=db_restore \db.mdf .
COPY --from=db_restore \db.ldf .
RUN Invoke-Sqlcmd -Query \"create database [Db] \
on primary ( name = N'Db_Data', filename = N'c:\\data\\db.mdf') \
log on (name = N'Db_Log', filename = N'c:\\data\\db.ldf') for attach\"
I'm struggling to find a suitable solution to this. I have a fairly large SQL Server 2008 Express database containing 60+ tables (many with key constraints) and a whole bunch of data.
I need to essentially copy all of these tables and the data and the constraints exactly from one database to another. I'm basically duplicating website A - to produce an exact copy (website B) on a different domain so we end up with two completely identical websites running in parallel, each with their own identical database to begin with.
Database A is up and running on website A. Database B is set up and has its own user. I just need to get the tables and the data intact from A to B. I can then modify my web.config connection to use the login credentials for database B and it should work.
I've tried backing up database A and restoring to database B via Management Studio Express, but it tells me:
System.Data.SqlClient.SqlError: The backup set holds a backup of a database other than the existing 'database-B' database.
(Microsoft.SqlServer.Smo)
I've also tried right clicking database A in Management Studio Express and going to Tasks > Generate scripts. But when I do this and run the SQL scripts on database B I get a whole load of errors to do with foreign keys etc as it imports the content. It seems like it's doing the right thing, but can't handle the different keys/relationships.
So does anyone know of a simple, sure-fire way of getting my data 100% exact and intact from database A to database B?
I think I used SQL Server Database Publishing Wizard to do something like this about 5 years ago, but that product seems to be defunct now - I tried to install it and it wanted me to regress my version of SQL Server to 2005, so I'm not going there!
Don't use the UI for this. If you're not familiar with the various aspects of BACKUP/RESTORE the UI is just going to lead you down the wrong path for a lot of options. The simplest backup command would be:
BACKUP DATABASE dbname TO DISK = 'C:\some folder\dbname.bak' WITH INIT;
Now to restore this as a different database, you need to know the file names because it will try to put the same files in the same place. So if you run the following:
EXEC dbname.dbo.sp_helpfile;
You should see output that contains the names and paths of the data and log files. When you construct your restore, you'll need to use these, but replace the paths with the name of the new database, e.g.:
RESTORE DATABASE newname FROM DISK = 'C:\some folder\dbname.bak'
WITH MOVE 'dbname' TO 'C:\path_from_sp_helpfile_output\newname_data.mdf',
MOVE 'dbname_log' TO 'C:\path_from_sp_helpfile_output\newname_log.ldf';
You'll have to replace dbname and newname with your actual database names, and also 'some folder' and C:\path_from_sp_helpfile_output\ with your actual paths. I can't get more specific in my answer unless I know what those are.
EDIT:
Here is a full repro, which works completely fine for me:
CREATE DATABASE [DB-A];
GO
EXEC [DB-A].dbo.sp_helpfile;
Partial results:
name fileid filename
-------- ------ ---------------------------------
DB-A 1 C:\Program Files\...\DB-A.mdf
DB-A_log 2 C:\Program Files\...\DB-A_log.ldf
Now I run the backup:
BACKUP DATABASE [DB-A] TO DISK = 'C:\dev\DB-A.bak' WITH INIT;
Of course if the clone target (in this case DB-B) already exists, you'll want to drop it:
USE [master];
GO
IF DB_ID('DB-B') IS NOT NULL
BEGIN
ALTER DATABASE [DB-B] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [DB-B];
END
GO
Now this restore will run successfully:
RESTORE DATABASE [DB-B] FROM DISK = 'C:\dev\DB-A.bak'
WITH MOVE 'DB-A' TO 'C:\Program Files\...\DB-B.mdf',
MOVE 'DB-A_log' TO 'C:\Program Files\...\DB-B_log.ldf';
If you are getting errors about the contents of the BAK file, then I suggest you validate that you really are generating a new file and that you are pointing to the right file in your RESTORE command. Please try the above and let me know if it works, and try to pinpoint any part of the process that you're doing differently.
I realize this is an old question, but I was facing the same problem and I found that the UI was easier and faster than creating scripts to do this.
I believe Dan's problem was that he created the new database first and then tried to restore another database into it. I tried this as well and got the same error. The trick is to not create the database first, but instead to name the new database during the "Restore Database" process.
The following article is somewhat useful in guiding you through the process:
http://msdn.microsoft.com/en-us/library/ms186390(v=sql.105).aspx
In SQL Server 2008 I can attach databases located only in its predefined folder (C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA). On many occasions, especially when reading a book, I need to attach a test database from the desktop rather than copying each database every time I need it, but SQL Server does not allow me to access the desktop.
Any workaround to solve this issue?
It's probably a matter of granting the account running the SQL service appropriate permissions to your desktop folder (C:\Documents and Settings\YourLogin\Desktop). But, rather than use a location like Desktop that is specific to your login and possibly inaccessible to the account running the SQL service, why not use a common holding location for these files? Something like C:\AdHocDBs or whatever you want to call it.
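If you'd rather script the permission grant than click through Explorer, here is a hedged C# sketch using DirectorySecurity; the folder and the service account name are assumptions (a default instance usually runs under the NT SERVICE\MSSQLSERVER service SID, a named instance under NT SERVICE\MSSQL$InstanceName):
using System.IO;
using System.Security.AccessControl;

class GrantSqlServiceAccess
{
    static void Main()
    {
        var dir = new DirectoryInfo(@"C:\AdHocDBs");
        if (!dir.Exists) dir.Create();

        // Grant the SQL Server service account full control, inherited by
        // the data and log files later placed in the folder
        DirectorySecurity acl = dir.GetAccessControl();
        acl.AddAccessRule(new FileSystemAccessRule(
            @"NT SERVICE\MSSQLSERVER",
            FileSystemRights.FullControl,
            InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
            PropagationFlags.None,
            AccessControlType.Allow));
        dir.SetAccessControl(acl);
    }
}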
When a database file (data or log) is first created, it is (of course) located in a specific drive and folder. When a backup is created, this information is stored as part of the backup. A database RESTORE command will assume that the database is to be restored in the exact same location, unless instructed otherwise. To do this, in the RESTORE command under the "with" option, you must include the "move" option. It looks something like this:
RESTORE ...
with
move '<logicalFileName>' to '<physicalFileName>',
move '<logicalLogFileName>' to '<physicalLogFileName>'
One move must be included for each file to be moved, so you usually end up with at least two of these clauses. The tricky part is that you must know the database files' logical names. These can be found via sp_helpFile on an attached database, or, for an existing backup, via:
RESTORE FILELISTONLY
from disk = '<backupFile>'
(I'm sure all this can be done somehow with the SSMS backup/restore GUIs. I switched over to TSQL-based scripts years ago, to provide quick and flexible access to all the features wrapped in the backup and restore commands.)
I use a WordPress plugin called 'Shopp'. It stores product images in the database rather than the filesystem as standard; I didn't think anything of this until now.
I have to move servers, and so I made a backup, but restoring the backup is proving to be a horrible task. I need to restore one table called wp_shopp_assets, which is 18MB.
Any advice is hugely appreciated.
Thanks,
Henry.
For large operations like this it is better to go to command line. phpMyAdmin gets tricky when lots of data is involved because there are all sorts of timeouts in PHP that can trip it up.
If you can SSH into both servers, then you can do a sequence like the following:
Log in to server1 (your current server) and dump the table to a file using mysqldump:
mysqldump --add-drop-table -uSQLUSER -pPASSWORD -h SQLSERVERDOMAIN DBNAME TABLENAME > BACKUPFILE
Do a secure copy of that file from server1 to server2 using scp:
scp BACKUPFILE USER@SERVER2DOMAIN:FOLDERNAME
Log out of server 1
Log into server 2 (your new server) and import that file into the new DB using mysql:
mysql -uSQLUSER -pPASSWORD DBNAME < BACKUPFILE
You will need to replace the UPPERCASE text with your own info. Just ask in the comments if you don't know where to find any of these.
It is worthwhile getting to know some of these command line tricks if you will be doing this sort of admin from time to time.
Try HeidiSQL: http://www.heidisql.com/
Connect to your server and choose the database.
Go to the menu "Import > Load SQL file", or simply paste the SQL into the SQL tab.
Execute the SQL (F9).
HeidiSQL is an easy-to-use interface and a "working-horse" for web-developers using the popular MySQL-Database. It allows you to manage and browse your databases and tables from an intuitive Windows® interface.
EDIT: Just to clarify, this is a desktop application; you will connect to your database server remotely. You won't be limited by PHP's script max runtime or upload size limit.
Use BigDump.
Create a folder on your server with a name that is not easy to guess, like "BigDump_D09ssS" or whatever.
Download the importer file from http://www.ozerov.de/bigdump.php and add it to that directory after reading the instructions and filling out your config information.
FTP the .sql file to that folder alongside the BigDump script, then navigate to that folder in your browser.
Selecting the file you uploaded will start importing the SQL in split chunks, which is a much faster method!
Or, if this is an issue, I recommend the other answer about the SSH and mysql -u -p -n -f method!
Even though this is an old post, I would like to add that it is recommended not to use database storage for images when you have more than about 10 product images.
Instead of exporting and importing such a huge file, it would be better to switch the Shopp installation to file storage for images before transferring.
You can use this free plug-in to help you. Always back up your files and database before performing this action.
What I do is open the file in a code editor, then copy and paste it into a SQL window within phpMyAdmin. Sounds silly, but I swear by it for large files.
For example, I have 2 servers, each running a copy of Linux cPanel.
I first install a free trial script into a subfolder residing on Server A.
e.g.:
sample.com/service/username1
sample.com/service/username2
sample.com/service/username3
Then, when people decide to upgrade, I will move all their files to theirnewdomain.com, including the database etc., onto Server B (another copy of cPanel).
In such a case, what is the best way to perform this kind of upgrade?
Moving files from Server A to Server B: is it possible to automate this? Zip them into a common place for Server B to pick up?
Moving a MySQL DB created in Server A's cPanel into Server B's cPanel: what's the best way? Recreate a fresh copy on Server B, then dump the data into it?
It seems it is not possible to directly modify where the DB points in cPanel, because there are now 2 servers, each with its own copy of cPanel. Things are separated.
Note that this process needs a few tasks done along the way, e.g. updating a centralized database with the file-moving status, domain creation status, DB creation status, and so on.
Any idea?
And how do other services that are hosted across multiple servers actually work?
In general, you probably want to follow the pattern of running a backup script and then extracting it on the new server. You may find some hints in this description of a manual process.
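As a rough illustration of the "zip it into a common place" idea, here is a hedged C# sketch; every path is hypothetical, and on a Linux/cPanel box you would more likely do the same with a shell script plus rsync or scp:
using System.IO.Compression;

class SiteMover
{
    static void Main()
    {
        // On Server A: bundle the account's files into a shared pickup location
        ZipFile.CreateFromDirectory(
            @"C:\sites\sample.com\service\username1",
            @"\\shared\transfer\username1.zip");

        // On Server B: extract into the new domain's folder
        ZipFile.ExtractToDirectory(
            @"\\shared\transfer\username1.zip",
            @"C:\sites\theirnewdomain.com");
    }
}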