I have a MongoDB database. I took a dump of it in the source environment and restored it in the target environment, but the size changes between the dump and the restore. So I was looking for ways to compare db.stats() for that database on both environments, before the dump and after the restore, and, if there is a mismatch in the database size, to throw an exception or print a message stating that the sizes differ, using a bash script.
Can someone suggest ways to check this?
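A minimal sketch of such a check, assuming mongosh is available, that the connection strings and database name below are placeholders you would substitute, and that dataSize from db.stats() is the figure you want to compare:

#!/bin/bash
# Placeholder connection strings for the source and target environments.
SRC_URI="mongodb://source-host:27017/mydb"
DST_URI="mongodb://target-host:27017/mydb"

# Pull dataSize (bytes of uncompressed data) from db.stats() on each side.
src_size=$(mongosh "$SRC_URI" --quiet --eval 'db.stats().dataSize')
dst_size=$(mongosh "$DST_URI" --quiet --eval 'db.stats().dataSize')

echo "source dataSize: $src_size"
echo "target dataSize: $dst_size"

if [ "$src_size" != "$dst_size" ]; then
    echo "ERROR: database size differs between source and target" >&2
    exit 1
fi
echo "Sizes match."

Note that storageSize and totalIndexSize from db.stats() will often legitimately differ after a restore even when the data is identical, because storage allocation and rebuilt indexes differ, so dataSize (or per-collection document counts) is usually the fairer comparison.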
Related
I'm currently using MariaDB 10.3.17 on CentOS 8 and I'm trying to restore a backup of a specific database. I made 2 dummy databases named test and test_restore. I updated test_restore, then created an incremental backup with --databases="mysql test_restore".
I ran:
# mariabackup --prepare --target-dir=<dir>/mariadb/backup/2020-02/18_10h_full/
# mariabackup --prepare --target-dir=<dir>/mariadb/backup/2020-02/18_10h_full/ --incremental-dir=<dir>/mariadb/backup/2020-02/18_10h10m_inc/
# mariabackup --copy-back --target-dir=<dir>/mariadb/backup/2020-02/18_10h_full/
After that, I lost all data in my test db but kept my updated test_restore db.
I can do a full backup and restore, with the incremental backup, of ALL databases together, but that's going to take a long time.
I might be wrong, but MariaDB's blog article How to Restore a Single Database from MariaDB Backup seems very suitable here. I'm linking it because it doesn't make sense to me to simply copy all of its instructions and hints.
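One common way to pull a single database out of a full Mariabackup, which I believe is roughly what the article describes, is the transportable-tablespace route. As a sketch only — mytable is a placeholder, paths follow the question, and it assumes --prepare --export is available (it is in 10.3) so that .cfg files are generated:

# Prepare the backup (after applying any incrementals) and generate .cfg export files.
mariabackup --prepare --export --target-dir=<dir>/mariadb/backup/2020-02/18_10h_full/

# On the running server, for each table to restore (the table must already
# exist with the same definition), detach its current tablespace.
mysql -e "ALTER TABLE test_restore.mytable DISCARD TABLESPACE;"

# Copy that table's .ibd and .cfg files from the prepared backup into the
# live datadir, then fix ownership.
cp <dir>/mariadb/backup/2020-02/18_10h_full/test_restore/mytable.{ibd,cfg} /var/lib/mysql/test_restore/
chown mysql:mysql /var/lib/mysql/test_restore/mytable.*

# Re-attach the restored tablespace.
mysql -e "ALTER TABLE test_restore.mytable IMPORT TABLESPACE;"

The linked article covers the details and caveats (row formats, partitions, and so on), so treat the above only as a map of where it is heading.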
Hi, I have a pretty big database running on Postgres 9.3. I back it up using pg_dump with compression. I am worried that these backups may be corrupted, that I won't be able to restore them properly (using pg_restore), or that the restored database could be corrupted. The database I back up is in constant use, so it's pretty hard (if not impossible) to check whether a restored database is working correctly by comparing rows (and, to be honest, I don't think such a test would give a meaningful result). Is there any way to check the integrity of a dump file or of a restored database? I read that Postgres 9.3 supports checksumming of database files, but I don't see how that would help my case.
Corruption is usually in the form of bad data that won't restore (character set weirdness and the like). I think the best you can do is automatically restore to a test db. If that process succeeds, you are likely ok.
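A minimal sketch of that kind of automated test restore, assuming the dump was taken with pg_dump in custom format (so pg_restore can read it), a scratch database you are free to drop and recreate, and placeholder names and paths throughout:

#!/bin/bash
set -e

DUMP=/backups/mydb.dump     # placeholder path to the pg_dump -Fc output
TESTDB=restore_test         # throwaway database used only for the check

# Cheap early check: listing the archive's table of contents fails
# if the dump file is truncated or unreadable.
pg_restore --list "$DUMP" > /dev/null

# Full test restore into a scratch database; --exit-on-error stops at
# the first problem instead of carrying on and hiding it.
dropdb --if-exists "$TESTDB"
createdb "$TESTDB"
pg_restore --exit-on-error --dbname="$TESTDB" "$DUMP"

echo "Test restore of $DUMP into $TESTDB succeeded"

Passing --list only proves the archive is readable; the restore itself (plus whatever application-level spot checks you can afford) is what gives real confidence.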
Following a SAN issue, a SQL Server database was marked Suspect. Due to the extent of the inconsistencies, recovery was from a valid backup and log backups. No other system or user databases had issues, and CHECKDBs succeeded. The recovered database also had a successful CHECKDB and the application was re-enabled.
However, the daily backups have been failing on the problem database. CHECKDB continues to succeed with no errors. Full and Copy_Only backups produce the same error (I have also tried continue_after_error):
Msg 3203, Level 16, State 1, Line 3
Read on "mydb.mdf" failed:
23(failed to retrieve text for this error. Reason 15105)
Msg 3203, Level 16, State 1, Line 3
BACKUP DATABASE is terminating abnormally.
I also see this in the System Event log:
The device, \Device\Harddisk2\DR2, has a bad block.
The server itself has since been restarted and SQL Server came back online with no errors. CHECKDB continues to report no errors for any of the databases - but the position is worsening, with no valid backup now for over a week.
Other forums suggest this error may be due to file access/permissions or not enough disk space for the backup to complete, but this is not the case: I have tried backing up to several different locations under different credentials with the same outcome.
I'm putting together a process to export all the DB objects and bulk copy all the data out into a clean database. Another option I've considered is detaching/stopping SQL Server and copying the mdf, ndf and ldf files to another server, but I'm reluctant to stop SQL Server at the moment without securing the data first.
I would welcome any thoughts on further checks I might be able to perform whilst the DB is online to establish what the bad block might relate to.
Screenshot 1 shows the backup getting about 70% of the way through.
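Not a full answer to the question above, but two things that can be checked while the DB stays online, sketched here as sqlcmd one-liners (server name, authentication and database name are placeholders; the statements can equally be run from SSMS). The first asks SQL Server whether it has already logged any damaged pages; the second backs up to the NUL device, which takes the backup destination out of the equation and so confirms whether the failure is purely on the read side of the mdf.

# Pages SQL Server has already flagged with read/checksum (823/824) errors, if any.
sqlcmd -S myserver -E -Q "SELECT database_id, file_id, page_id, event_type, error_count, last_update_date FROM msdb.dbo.suspect_pages;"

# Throwaway backup to the NUL device: no file is written, so if this still
# fails with Msg 3203 the problem is reading the data file, not the target.
sqlcmd -S myserver -E -Q "BACKUP DATABASE [mydb] TO DISK = N'NUL' WITH COPY_ONLY, CHECKSUM;"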
Just to say that we have concluded the mdf file is beyond repair. To share the scenario again:
With the bad sector in the mdf file:
* the T-Log backups succeeded
* the database was still accessible/functioning
* CHECKDB was still coming back clean
However:
* Full & Diff backups failed
* the MDF file could not be copied when the DB was detached
* the DB could still be reattached in situ
Due to some poor file management and the delayed identification of this whole issue:
* the log chain became broken (due to limited log backup retention)
* the only solution was to restore an old backup and do a painful copy-out of the data
I'm trying to move a fairly large database (50GB) to Azure. I am running this command on the local SQL Server to generate a bacpac I can upload:
SqlPackage.exe /a:Export /ssn:localhost /sdn:MDBILLING /su:sa /sp:SomePassword /tf:"D:\test.bacpac"
The export does not print any errors and finishes with "Successfully exported database and saved it to file 'D:\test.bacpac'."
When I look at the bacpac in the file system, it always comes out to be 3.7GB. There's no way a 50GB database can be compressed that small. I upload it to Azure regardless. The package upload succeeds, but when I query the Azure database most of the tables return 0 rows. It's almost as if the bacpac does not contain all my database's data.
Are there any known limitations with this export method? Database size, certain data types, etc?
I tried using the 64-bit version of SqlPackage after reading that some people experienced out-of-memory issues on large databases, but I wasn't getting this error, or any error for that matter.
UPDATE/EDIT: I made some progress after ensuring that the export is transactionally consistent by restoring a backup and then extracting a bacpac from that. However, now I have run into a new error when uploading to Azure.
I receive the following message (using an S3 database):
Error encountered during the service operation. Data plan execution failed with message One or more errors occurred. One or more errors occurred. One or more errors occurred. One or more errors occurred. XML parsing: Document parsing required too much memory One or more errors occurred. XML parsing: Document parsing required too much memory
The problem is resolved. My issues were two-fold.
First, because bacpac operations are not transactionally consistent, I had to restore from backup and make a bacpac out of the restored database. This ensured users were not adding rows while the bacpac was being generated.
The second issue was an XML column in my database. The table has roughly 17 million rows, and of those, roughly 250 had really large XML documents stored in them (200,000+ characters). Removing those 250 rows and then reimporting them solved my problems. I really don't think it was the size of the XML documents that Azure had an issue with; I think those large documents contained special characters the XML parser didn't like.
It's unclear to me how SQL Server allowed unparseable XML to get into my database in the first place, but that was the other issue.
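For anyone needing to find rows like those 250 before exporting, a query along these lines will surface them; the table name, key column and XML column are made up for illustration, and the byte threshold is only a starting point:

# List the largest XML payloads so they can be reviewed or excluded before export.
sqlcmd -S localhost -d MDBILLING -U sa -Q "SELECT TOP (500) Id, DATALENGTH(XmlCol) AS xml_bytes FROM dbo.MyTable WHERE DATALENGTH(XmlCol) > 400000 ORDER BY xml_bytes DESC;"

DATALENGTH on an xml column reports the size of SQL Server's internal representation rather than the raw character count, so treat the threshold as approximate.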
I am backing up a database whose size is about 190 GB. I want to back up the database to a local file. This is the command I am using:
mysqldump -u root -p tradeData > /db_backup/tradeData.sql
I have enough space on my machine. I tried a bunch of times and got no errors, but I am always getting a result file whose size is around 122GB.
Does anyone have experience with backing up large databases? My machine is a Linux one.
Using information like the SQL query here won't give you a one-to-one correspondence between your dump file and what is actually in the server. A live database has indexes and internal structures that only exist when the data is actually loaded into the database engine, not in a logical dump. As RolandoMySQLDBA explains:
From the dump file size, it is hard to judge, because the combined total size of data pages and index pages may be far less than the size of the ibdata1 the dump was created from.
So my guess is your database includes InnoDB tables among other things that bloat the DB when compared to a bare dump.
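One way to sanity-check that guess, with connection details as placeholders, is to ask the server how its size breaks down into data versus indexes per schema and compare the data portion against the 122GB dump file:

# Per-schema data vs. index size as reported by MySQL, in GB.
mysql -u root -p -e "SELECT table_schema, ROUND(SUM(data_length)/1024/1024/1024, 2) AS data_gb, ROUND(SUM(index_length)/1024/1024/1024, 2) AS index_gb FROM information_schema.tables GROUP BY table_schema;"

The .sql file produced by mysqldump contains only CREATE and INSERT statements; indexes are not stored in it at all and are rebuilt on import, so a dump noticeably smaller than the live data directory is expected.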