I have been trying to set up data archiving in my Postgres database. I have successfully archived WAL files to a different folder on the same server by adding this archive_command:
archive_command = 'test ! -f /usr/share/site1/wal/%f && cp %p /usr/share/site1/wal/%f'
But now I have another server, 18.15.12.101, and I need to archive to the same folder location on that other server.
Can someone let me know how to do it? I tried this, but it failed:
archive_command = 'test ! -f 18.15.12.101/usr/share/site1/wal/%f && cp %p 18.15.12.101/usr/share/site1/wal/%f'
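From what I have read, the copy probably has to go over SSH instead, so I am wondering whether something along these lines is the right direction (untested; it assumes key-based SSH access from the postgres user to 18.15.12.101 is already set up):
archive_command = 'scp %p postgres@18.15.12.101:/usr/share/site1/wal/%f'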
Here is my problem:
I have a server with DB2 v10.5.9 and v11.1.4.4 installed. I updated the server, the instances, and the databases that were there at the time, and it all works great. Now I have added a new volume to the system that contains a database in version 10.5.9, and I need to relocate it and upgrade it to v11.1.4.4. The relocate within v10 is not a problem, but I can't do the relocate in v11 with that database. A command "db2 catalog db on /system/metalog/" doesn't work either.
I have the relocate.cfg file that should work, but of course all my commands go against v11, since that is the default version now.
My idea was:
db2 catalog db testdb on /system/testdb/metalog/
db2 upgrade db testdb
What must I do to perform the relocate and then the upgrade so that I can use that DB in v11?
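For reference, the full sequence I have in mind is roughly this (only a sketch; the config file path and the TESTDB1 name come from my setup shown below):
db2relocatedb -f /db2/testdb/scripts/relocate_olddb1.cfg
db2 catalog db testdb1 on /db2/testdb/data/testdb/testdb1/metalog/
db2 upgrade db testdb1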
This is what I did to get the correct folder structure:
mkdir -p data/testdb/NODE0000
mkdir -p metalog/testdb
mkdir -p /db2/backup/testdb/testdb1/archivlogfiles/
mv metalog/olddb/NODE0000/ metalog/testdb/
mkdir data/testdb/NODE0000/TESTDB1
mv data/testdb/NODE0000/OLDDB1/ data/testdb/NODE0000/TESTDB1/
Here is the relocate.cfg
DB_NAME=OLDDB1,TESTDB1
DB_PATH=/db2/olddb/data/olddb1/metalog/,/db2/testdb/data/testdb/testdb1/metalog
INSTANCE=olddb,testdb
STORAGE_PATH=/db2/olddb/data/olddb1/data/,/db2/testdb/data/testdb/testdb1/data/
LOG_DIR=/db2/olddb/data/olddb1/metalog/olddb/NODE0000/SQL00001/LOGSTREAM0000/,/db2/testdb/data/testdb/testdb1/metalog/testdb/NODE0000/SQL00001/
LOGARCHMETH1=DISK:/db2/backup/testdb/testdb1/archivlogfiles/
CONT_PATH=/db2/olddb/data/olddb1/data/olddb1_TS_32PART.dbf,/db2/testdb/data/testdb/testdb1/data/olddb1_TS_32PART.dbf
This is the relocate log:
Logging started at Mon May 4 11:29:10 2020
Input file: /db2/testdb/scripts/relocate_olddb1.cfg
Opening configuration file.
-> File: "/db2/testdb/scripts/relocate_olddb1.cfg"
Changes requested:
-> Database name:
Old: "olddb1"
New: "testdb1"
-> Database path:
Old: "/db2/olddb/data/olddb1/metalog/"
New: "/db2/testdb/data/testdb/testdb1/metalog/"
-> Instance name:
Old: "olddb"
New: "testdb"
-> Node number: 0
-> Log directory:
Old: "/db2/testdb/data/testdb/testdb1/metalog/testdb/NODE0000/SQL00001/"
New: "/db2/testdb/data/testdb/testdb1/metalog/testdb/NODE0000/SQL00001/"
-> Container paths:
Old: "/db2/olddb/data/olddb1/data/olddb1_TS_32PART.dbf"
New: "/db2/testdb/data/testdb/testdb1/data/olddb1_TS_32PART.dbf"
-> Storage paths:
Old: "/db2/olddb/data/olddb1/data"
New: "/db2/testdb/data/testdb/testdb1/data"
SD mode: no
** PASS #1: Verifying Files and Structures **
Opening the local directory file.
-> File: "/db2/testdb/data/testdb/testdb1/metalog/testdb/NODE0000/sqldbdir/sqldbdir"
Reading directory header.
Reading hash offset table.
Reading 1 entries into memory.
Opening the global log control file
ERROR: Unable to open global log control file.
Path = "/db2/testdb/data/testdb/testdb1/metalog/testdb/NODE0000/SQL00001/"
DB2 RC = 0x801008dc
ERROR: Failed to initialize member configuration information.
DB2 RC = 0x801008dc
Exiting with RC = 1.
Logging stopped at Mon May 4 11:29:10 2020
My db2level is set to db2 v11.1.4.4.
Thank you for your help.
I have to restore a database and am following this official documentation, which has two steps:
- List the files
- Run the RESTORE command using the files listed above.
However, I am getting an "already claimed" error.
I tried using different names, but that is not possible since the backup contains specific files. I also tried answers from other sites, but they all use a GUI.
The first command that I ran was:
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost \
-U SA -P '<YourStrong#Passw0rd>' \
-Q 'RESTORE FILELISTONLY FROM DISK = "/var/opt/mssql/backup/us_national_statistics.bak"' \
| tr -s ' ' | cut -d ' ' -f 1-2
I got the following output:
LogicalName PhysicalName
------------------------ ------------------------
us_national_statistics C:\Program
us_national_statistics_log C:\Program
Then, as per the documentation, I ran this command:
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd \
-S localhost -U SA -P '<YourStrong#Passw0rd>' \
-Q 'RESTORE DATABASE US_NATIONAL FROM DISK = "/var/opt/mssql/backup/us_national_statistics.bak" WITH MOVE "us_national_statistics" TO "C:\Program", MOVE "us_national_statistics_log" TO "C:\Program"'
Here, I get the following error:
Msg 3176, Level 16, State 1, Server 0a6a6aac7476, Line 1
File 'C:\Program\New' is claimed by 'us_national_statistics_log'(2) and 'us_national_statistics'(1). The WITH MOVE clause can be used to relocate one or more files.
Msg 3013, Level 16, State 1, Server 0a6a6aac7476, Line 1
RESTORE DATABASE is terminating abnormally.
I expect the database to be restored.
You can't restore to C:\Program for multiple reasons. That's not a full path (you seem to have lost the string after the first space in Program Files); the data and log can't both be put in the same file; you don't typically have write access to the root of any drive; and C:\ is not valid in Docker or Linux.
You need the LogicalName, but you should not be using the PhysicalName directly, either in the case where you are restoring to Docker or Linux, or in the case where you are restoring a database alongside an existing copy that you want to keep, or in the case where you are restoring a database to a different instance (which will more than likely have a different data folder structure).
Try:
RESTORE DATABASE US_NATIONAL_COPY
FROM DISK = "/var/opt/mssql/backup/us_national_statistics.bak"
WITH REPLACE, RECOVERY,
MOVE "us_national_statistics" TO "/var/opt/mssql/data/usns_copy.mdf",
MOVE "us_national_statistics_log" TO "/var/opt/mssql/data/usns_copy.ldf";
Is there any direct utility available to purge older logs from a Greenplum database? If I do it manually it takes a lot of time, as there are 100+ segments and I have to go to each server and delete the log files by hand.
Other details: GP version - 4.3.X.X (Software Only Solution)
Cluster config - 2+10
Thanks
I suggest you create a cron job and use gpssh to do this. For example:
gpssh -f ~/host_list -e 'for i in $(find /data/primary/gpseg*/pg_log/ -name "*.csv" -ctime +60); do rm $i; done'
This will remove the .csv log files in pg_log on all segments that are more than 60 days old. Of course, you should test this and make sure the path to pg_log is correct.
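If you want this fully automated, a crontab entry for the gpadmin user on the master could look something like this (the host_list location, schedule, and output file are just examples; adjust them to your environment):
0 2 * * * gpssh -f /home/gpadmin/host_list -e 'for i in $(find /data/primary/gpseg*/pg_log/ -name "*.csv" -ctime +60); do rm $i; done' > /home/gpadmin/pg_log_purge.out 2>&1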
I want to make a backup of a Firebird database. In the documentation I read that I should do it with:
/opt/firebird/bin/nbackup -B 0 /home/server/daten/DB.fdb DB19082014.nbk
This works, and I get a file DB19082014.nbk. I copied it to my computer, and now I want to restore it with:
/opt/firebird/bin/nbackup -R /home/server/daten/DB.fdb db19082014.nbk
But now I get the error:
I/O error during "open" operation for file "/home/server/daten/DB.fdb.delta"
Error while trying to open file
null
But I don't have a .delta file, neither on my system nor on the system where I made the backup. Does anybody know where or how I can create an empty .delta file to get the database working?
Thank You
Solution:
The database must be unlocked (fixed up) with:
nbackup -F <database>
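In my case that meant running the fixup against the restore target from the command above (adjust the path if your database lives elsewhere):
/opt/firebird/bin/nbackup -F /home/server/daten/DB.fdb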
I want to back up a database using this code
sqlcmd -S servername -Q "BACKUP DATABASE [DBName] TO DISK = 'C:\backup.bak'"
It works. But if the backup file already exists, the data gets appended to the file instead of replacing the file. Every time I call BACKUP DATABASE the file gets bigger.
Is there an option for BACKUP DATABASE to force a replace?
sqlcmd -S servername -Q "BACKUP DATABASE [DBName] TO DISK = 'C:\backup.bak' WITH INIT"
INIT does the trick. From MSDN:
INIT Specifies that all backup sets should be overwritten
WITH INIT alone is not enough these days; it should be WITH INIT, SKIP. See the BACKUP docs.
Explanation: INIT overwrites existing backup sets only if certain checks (such as backup set expiration) pass; SKIP tells BACKUP to ignore those checks.
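Applied to the command from the question, that would be something like this (same server name and path placeholders as above):
sqlcmd -S servername -Q "BACKUP DATABASE [DBName] TO DISK = 'C:\backup.bak' WITH INIT, SKIP"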
BACKUP DATABASE SQ_P TO DISK='D:\Data Backup\SQ_P.bak' with init;
where SQ_P is the Database Name