Here is my problem:
I have a server with DB2 v10.5.9 and v11.1.4.4 installed. I updated the server, the instances, and the databases that were there at the time, and it all works great. Now I have added a new volume to the system that contains a database in version 10.5.9, and I need to relocate it and upgrade it to v11.1.4.4. The relocate within v10 is not a problem, but I can't do the relocate under v11 with that database. A command "db2 catalog db on /system/metalog/" doesn't work either.
I have a relocate.cfg file that should work, but all my commands go against v11, of course, since that is the default DB2 version now.
My idea was:
db2 catalog db testdb on /system/testdb/metalog/
db2 upgrade db testdb
What must I do to perform the relocate and then the upgrade so that I can use that database in v11?
This is what I did to get the correct folder structure:
mkdir -p data/testdb/NODE0000
mkdir -p metalog/testdb
mkdir -p /db2/backup/testdb/testdb1/archivlogfiles/
mv metalog/olddb/NODE0000/ metalog/testdb/
mkdir data/testdb/NODE0000/TESTDB1
mv data/testdb/NODE0000/OLDDB1/ data/testdb/NODE0000/TESTDB1/
Here is the relocate.cfg
DB_NAME=OLDDB1,TESTDB1
DB_PATH=/db2/olddb/data/olddb1/metalog/,/db2/testdb/data/testdb/testdb1/metalog
INSTANCE=olddb,testdb
STORAGE_PATH=/db2/olddb/data/olddb1/data/,/db2/testdb/data/testdb/testdb1/data/
LOG_DIR=/db2/olddb/data/olddb1/metalog/olddb/NODE0000/SQL00001/LOGSTREAM0000/,/db2/testdb/data/testdb/testdb1/metalog/testdb/NODE0000/SQL00001/
LOGARCHMETH1=DISK:/db2/backup/testdb/testdb1/archivlogfiles/
CONT_PATH=/db2/olddb/data/olddb1/data/olddb1_TS_32PART.dbf,/db2/testdb/data/testdb/testdb1/data/olddb1_TS_32PART.dbf
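For completeness, the relocate itself is driven by db2relocatedb pointing at this file; a minimal sketch, assuming it is run as the target (testdb) instance owner:
db2relocatedb -f /db2/testdb/scripts/relocate_olddb1.cfg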
This is the relocate log:
Logging started at Mon May 4 11:29:10 2020
Input file: /db2/testdb/scripts/relocate_olddb1.cfg
Opening configuration file.
-> File: "/db2/testdb/scripts/relocate_olddb1.cfg"
Changes requested:
-> Database name:
Old: "olddb1"
New: "testdb1"
-> Database path:
Old: "/db2/olddb/data/olddb1/metalog/"
New: "/db2/testdb/data/testdb/testdb1/metalog/"
-> Instance name:
Old: "olddb"
New: "testdb"
-> Node number: 0
-> Log directory:
Old: "/db2/testdb/data/testdb/testdb1/metalog/testdb/NODE0000/SQL00001/"
New: "/db2/testdb/data/testdb/testdb1/metalog/testdb/NODE0000/SQL00001/"
-> Container paths:
Old: "/db2/olddb/data/olddb1/data/olddb1_TS_32PART.dbf"
New: "/db2/testdb/data/testdb/testdb1/data/olddb1_TS_32PART.dbf"
-> Storage paths:
Old: "/db2/olddb/data/olddb1/data"
New: "/db2/testdb/data/testdb/testdb1/data"
SD mode: no
** PASS #1: Verifying Files and Structures **
Opening the local directory file.
-> File: "/db2/testdb/data/testdb/testdb1/metalog/testdb/NODE0000/sqldbdir/sqldbdir"
Reading directory header.
Reading hash offset table.
Reading 1 entries into memory.
Opening the global log control file
ERROR: Unable to open global log control file.
Path = "/db2/testdb/data/testdb/testdb1/metalog/testdb/NODE0000/SQL00001/"
DB2 RC = 0x801008dc
ERROR: Failed to initialize member configuration information.
DB2 RC = 0x801008dc
Exiting with RC = 1.
Logging stopped at Mon May 4 11:29:10 2020
My db2level reports DB2 v11.1.4.4.
Thank you for your help.
Related
I am trying to do data archival in my Postgres. I have successfully taken a backup on the same server, but into a different folder, by adding archive_command:
archive_command = 'test ! -f /usr/share/site1/wal/%f && cp %p /usr/share/site1/wal/%f'
But now I have another server, 18.15.12.101, and I need to use the same folder location on that other server.
Can someone let me know how to do it? I tried this but it failed:
archive_command = 'test ! -f 18.15.12.101/usr/share/site1/wal/%f && cp %p 18.15.12.101/usr/share/site1/wal/%f'
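Note that cp only works with paths local to the database server, so prefixing an IP address like that does nothing useful; the command itself has to copy the file over the network. A minimal sketch, assuming password-less SSH access for the postgres user and that /usr/share/site1/wal already exists on 18.15.12.101 (rsync here is a suggestion, not part of the original setup):
archive_command = 'rsync -a %p postgres@18.15.12.101:/usr/share/site1/wal/%f'
An scp-based command or an NFS mount of the remote directory works the same way; the important part is that the command only returns success once the file is safely on the remote side.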
$ /usr/pgsql-12/bin/pg_upgrade \
> -b /usr/pgsql-10/bin/ \
> -B /usr/pgsql-12/bin/ \
> -d /var/lib/pgsql/10/data/ \
> -D /var/lib/pgsql/12/data/ \
> --check
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for system-defined composite types in user tables ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for tables WITH OIDS ok
Checking for invalid "sql_identifier" user columns ok
Checking for presence of required libraries fatal
Your installation references loadable libraries that are missing from the
new installation. You can add these libraries to the new installation,
or remove the functions using them from the old installation. A list of
problem libraries is in the file:
loadable_libraries.txt
Failure, exiting
[postgres@localhost ~]$ cat loadable_libraries.txt
could not load library "$libdir/ltree": ERROR: could not access file "$libdir/ltree": No such file or directory
Database: ___
Database: ___
could not load library "$libdir/pg_trgm": ERROR: could not access file "$libdir/pg_trgm": No such file or directory
Database: ___
Database: ___
could not load library "$libdir/uuid-ossp": ERROR: could not access file "$libdir/uuid-ossp": No such file or directory
Database: ___
Database: ___
Valid steps for an upgrade from Postgres 10 to 12 would be highly appreciated, as I didn't find any well-reviewed guide that is complete.
I am currently following this link: https://www.postgresql.r2schools.com/how-to-upgrade-from-postgresql-11-to-12/, replacing 11 with 10 in every command.
Thanks in advance.
You can run the command:
sudo dnf install postgresql12-contrib
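That installs the contrib modules (ltree, pg_trgm and uuid-ossp among them) for the version-12 binaries, which is exactly what the library check is complaining about. After installing it, the same check as above can be re-run; a sketch using the paths from the question:
/usr/pgsql-12/bin/pg_upgrade \
  -b /usr/pgsql-10/bin/ -B /usr/pgsql-12/bin/ \
  -d /var/lib/pgsql/10/data/ -D /var/lib/pgsql/12/data/ \
  --check
Once --check passes, running the same command without --check performs the actual upgrade (as the postgres user, with both clusters stopped).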
I have to restore a database and am following this official documentation where I follow two steps:
- List the files
- Run the RESTORE command using the files listed in the first step.
However, I am getting an "already claimed" error.
I tried to use different names, but that is not possible since the backup contains specific files. I also tried answers from other sources, but they all use a GUI.
The first command that I ran was:
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost \
-U SA -P '<YourStrong#Passw0rd>' \
-Q 'RESTORE FILELISTONLY FROM DISK = "/var/opt/mssql/backup/us_national_statistics.bak"' \
| tr -s ' ' | cut -d ' ' -f 1-2
I got the following output:
LogicalName PhysicalName
-------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
us_national_statistics C:\Program
us_national_statistics_log C:\Program
Then, as per the documentation, I ran this command:
sudo docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd \
-S localhost -U SA -P '<YourStrong#Passw0rd>' \
-Q 'RESTORE DATABASE US_NATIONAL FROM DISK = "/var/opt/mssql/backup/us_national_statistics.bak" WITH MOVE "us_national_statistics" TO "C:\Program", MOVE "us_national_statistics_log" TO "C:\Program"'
Here, I get the following error:
Msg 3176, Level 16, State 1, Server 0a6a6aac7476, Line 1
File 'C:\Program\New' is claimed by 'us_national_statistics_log'(2) and 'us_national_statistics'(1). The WITH MOVE clause can be used to relocate one or more files.
Msg 3013, Level 16, State 1, Server 0a6a6aac7476, Line 1
RESTORE DATABASE is terminating abnormally.
I expect the database to be restored.
You can't restore to C:\Program for multiple reasons. That's not a full path (you seem to have lost the string after the first space in Program Files); the data and log can't both be put in the same file; you don't typically have write access to the root of any drive; and C:\ is not valid in Docker or Linux.
You need the LogicalName, but you should not use the PhysicalName directly: not when restoring to Docker or Linux, not when restoring a copy alongside an existing database you want to keep, and not when restoring to a different instance (which will more than likely have a different data folder structure).
Try:
RESTORE DATABASE US_NATIONAL_COPY
FROM DISK = "/var/opt/mssql/backup/us_national_statistics.bak"
WITH REPLACE, RECOVERY,
MOVE "us_national_statistics" TO "/var/opt/mssql/data/usns_copy.mdf",
MOVE "us_national_statistics_log" TO "/var/opt/mssql/data/usns_copy.ldf";
I start my MariaDB with
/etc/init.d/mysql start
Then I get
starting MariaDB database server mysqld
No more messages.
When I call
service mysql status
I get
MariaDB is stopped
Why?
My my.cnf is:
# Example mysql config file.
[client-server]
socket=/tmp/mysql-dbug.sock
port=3307
# This will be passed to all mysql clients
[client]
password=XXXXXX
# Here are entries for some specific programs
# The following values assume you have at least 32M ram
# The MySQL server
[mysqld]
temp-pool
key_buffer_size=16M
datadir=/etc/mysql/data
loose-innodb_file_per_table
[mariadb]
datadir=/etc/mysql/data
default-storage-engine=aria
loose-mutex-deadlock-detector
max_connections=20
[mariadb-5.5]
language=/my/maria-5.5/sql/share/english/
socket=/tmp/mysql-dbug.sock
port=3307
[mariadb-10.1]
language=/my/maria-10.1/sql/share/english/
socket=/tmp/mysql2-dbug.sock
[mysqldump]
quick
max_allowed_packet=16M
[mysql]
no-auto-rehash
loose-abort-source-on-error
Thank you for your help.
If your SELinux is set to permissive, please try to adjust the permissions:
Files in /var/lib/mysql should be 660.
The /var/lib/mysql directory should be 755, and any of its subdirectories should be 700.
If your SELinux is set to enforcing, please apply the right context.
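A sketch of those permission fixes (this assumes the default datadir /var/lib/mysql; the my.cnf above actually points at /etc/mysql/data, so adjust the path to match):
chown -R mysql:mysql /var/lib/mysql
chmod 755 /var/lib/mysql
find /var/lib/mysql -mindepth 1 -type d -exec chmod 700 {} \;
find /var/lib/mysql -type f -exec chmod 660 {} \;
restorecon -Rv /var/lib/mysql   # only needed with SELinux enforcing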
My current log_directory path is
**/opt/demo/PostgreSQL/9.4/data/pg_log**
I'm trying to change the log directory path to
**/logs/demo/**
The server won't start when I uncomment the log path; it only starts when the setting is left at its default.
The postgresql.conf file looks like this:
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# This is used when logging to stderr:
logging_collector = on
# These are only used if logging_collector is on:
#log_directory = '/logs/etbos/demo/' # directory where log files are written
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
# This is only relevant when logging to eventlog (win32):
#event_source = 'PostgreSQL'
So this is what I supposed :) You need to grant the postgres user permissions on the new log directory.
You can do this, for example, with:
sudo chown postgres:postgres /your/new/log/dir/path
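Concretely, for the directory from the question that would be something like the following (how you restart depends on how this 9.4 instance was installed):
sudo mkdir -p /logs/demo
sudo chown postgres:postgres /logs/demo
sudo chmod 700 /logs/demo
Then set log_directory = '/logs/demo/' in postgresql.conf and restart the server.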
Answering your other question:
To allow TCP/IP connections from remote hosts you need to edit the pg_hba.conf file.
You can allow ALL TCP/IP connections by adding a line like this:
host all all 0.0.0.0/0 md5
There are five parameters in that line; you can read about them in the comments at the top of the pg_hba.conf file, but in short they mean:
[connection_type] [database_name] [user_name] [remote_ip/mask] [auth_type]
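For example (the database and user names in the second entry are placeholders, not from the question):
# [type] [database] [user] [address] [auth]
# any IPv4 client, password (md5) authentication:
host    all     all     0.0.0.0/0       md5
# a single database and user from one subnet:
host    mydb    myuser  192.168.1.0/24  md5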