How to set up an existing Postgres DB on a new server?

I have a PostgreSQL DB on an AWS instance. For some reason the instance is now damaged, and the only thing I can do is detach the disk volume and attach it to a new instance.
The challenge now is: how do I set up the PostgreSQL DB from the damaged instance's volume on the new instance without losing any data?
I tried attaching the damaged instance's volume as the main volume on the new instance, but it doesn't boot. So instead I mounted the volume as a secondary disk, and now I can see the information on it, including the "data" folder where the Postgres DB information is supposed to live, but I don't know what to do in order to enable the DB on this new instance.

The files in the /path/to/data/ directory are all you need for a PostgreSQL instance to start up, provided the permissions are set to 0700 and the directory is owned by the user starting the process (usually postgres, but sometimes another user). The other things to bear in mind are:
1. The destination OS needs to be the same as the one the data/ directory came from (differences in architecture, collation, and binary layout can corrupt your data or prevent Postgres from starting up).
2. The destination filesystem needs to be the same as the one the data/ directory came from (for reasons similar to the above).
3. The Postgres major version needs to be the same as the one the data/ directory came from (the on-disk format changes between major versions).
If these conditions are met, you should be able to bring up the database and use it.
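For example, a minimal sketch of bringing the old data directory up on the new instance, assuming the old volume is mounted at /mnt/olddisk (all paths here are placeholders for your actual layout):

$ sudo chown -R postgres:postgres /mnt/olddisk/path/to/data   # Postgres must own the directory
$ sudo chmod 700 /mnt/olddisk/path/to/data                    # it refuses to start if the dir is group/world accessible
$ sudo -u postgres pg_ctl -D /mnt/olddisk/path/to/data -l /tmp/pg.log start   # use the full path to the server's pg_ctl if it is not on PATH

Once it starts cleanly, you can either keep running from that location or copy the directory into your distribution's default data location.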

Related

Transferring MDF/LDF files to target server

Background:
I have a medium-sized database (900 GB) that needs to be copied onto another server (driven via code, not scheduled). Currently we take a backup (to .bak), move it to a staging server, and restore it from there to the target server. The target server does not have enough space to hold the backup file and the restored database simultaneously, hence the staging server. These transfers (backup to staging, restore from staging) happen over SMB2. However, the staging server needs to go away due to business requirements. It is worth mentioning that the target server will be taken offline (and used offline) after the transfer, so I'm not sure the mirroring or replication options are valid.
I have identified two options. One is to back up the database on the primary server and open up firewall/SMB rules so the backup file can be served to the target server over SMB ("RESTORE DATABASE ... FROM DISK = '\\x.x.x.x\blah\db.bak'"). Security isn't a fan, though.
The ideal solution (and one that could easily be implemented in every other database I've worked with) is to quiesce the database and transfer the data files (in the case of MS SQL, the MDF and LDF files). However, my research shows there is no such functionality available out of the box. I know I can take the database offline to copy the MDF/LDF safely, but that's not an acceptable solution (the database must remain online).
I have read LOTS of posts and Microsoft documentation regarding VSS/shadow copy, but I have also read lots of conflicting information about the reliability of using VSS/SQLWriter to copy the MDF/LDF files to the target server and simply re-attach the database.
I am looking for documentation or advice (or even backup software that can be programmatically driven via an API) to accomplish this goal of transferring the database without requiring a secondary holding place. Currently I'm researching how to drive this copying process with PowerShell, using VSS (vssadmin/vshadow from the SDK), but I'm not confident in what I'm reading, and it's not even clear to me whether VSS/SQLWriter is a supported method for copying online MDF/LDF files. Any advice is appreciated.
Thanks,
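For what it's worth, a minimal sketch of the first option described above (server names, share path, database name, and logical file names are placeholders, not details from the post):

> sqlcmd -S PRIMARY -Q "BACKUP DATABASE MyDb TO DISK = 'D:\Backups\db.bak' WITH COMPRESSION"
> sqlcmd -S TARGET -Q "RESTORE DATABASE MyDb FROM DISK = '\\x.x.x.x\Backups\db.bak' WITH MOVE 'MyDb' TO 'D:\Data\MyDb.mdf', MOVE 'MyDb_log' TO 'D:\Data\MyDb_log.ldf'"

The logical names passed to WITH MOVE have to match what RESTORE FILELISTONLY reports for the backup file.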

Recover data of dropped tables without backup

I made a blunder in my client's database: I dropped all the tables and then created new tables with the same names. I lost all the client data, and I don't have any backup of the client DB. Can you please let me know whether I can recover the data from the old tables?
A few options, all untested, and I am not sure how consistent the database will be:
1. Redgate provides a tool called SQL Log Rescue. It claims to let you view and recover deleted and modified data.
2. The Volume Shadow Copy Service. Some reference on what this means (emphasis mine):
This service allows Windows to take automatic or manual backups, or snapshots, of the current state of the files on a particular volume (drive letter). The important part of this process is that these backups can be taken of files even if they are open. Therefore, this provides a mechanism that backup programs and Windows can use to retain a reliable history of a computer's files
Below is a step-by-step tutorial on how to do this:
https://www.bleepingcomputer.com/tutorials/how-to-recover-files-and-folders-using-shadow-volume-copies/
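Before walking through that tutorial, it is worth checking from an elevated command prompt whether any shadow copies actually exist for the volume; if nothing is listed, there is no snapshot to recover from:

> vssadmin list shadows /for=C:
> vssadmin list shadowstorage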

SQL Server 2014 Database NDF file Lost - Filegroup offline

I have a database that has lost one of its .ndf files, and I have been unable to get at the data. The .ndf file in question was added last Thursday and placed in a temporary storage location by a colleague (d'oh!). There is no backup of this database from before that .ndf was created. I have seen numerous solutions to similar problems where the .ndf in question is the only file in its filegroup, but in this case it is in a filegroup with an additional file which I want to try to get data from. I am pretty sure what I am trying to do is not possible, but there is always a chance, right?
The database setup
PRIMARY: Data.mdf - 200 MB
Data Filegroup 1: Data_1.ndf - 2.9 GB
                  Data_2.ndf - 64 GB (newly added file that is now lost; I believe it is just preallocated space)
LOG: Log.ldf - 128 MB
When we logged into the VM this morning (hosted in Azure), we were presented with an unexpected shutdown notification from Windows (it seems there was a power loss/shutdown at 1am) and our application was not reaching the database. Looking in SQL Server Management Studio I could see that it was in Recovery Pending status. Trying to bring the DB online led me to an error about Data_2.ndf not being found (located at D:\SQL\Data\Data_2.ndf).
When I accessed the D drive (the temporary storage drive) I was presented with a wonderful blank Windows Explorer window - a completely blank drive.
I was able to set the Data_2.ndf file offline and bring the database itself online, however I am not able to query any of the data (as all tables were in Data Filegroup 1) due to the filegroup being offline. The other 3 files (mdf, ndf, ldf) are all online.
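For reference, the file-offline step described above is normally done with ALTER DATABASE (the database name here is a placeholder for the real one):

> sqlcmd -S localhost -Q "ALTER DATABASE MyDb MODIFY FILE (NAME = Data_2, OFFLINE)"

Note that this is a one-way operation: a file taken offline this way can only be brought back online by restoring it from a backup.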
Is there any way out of this? Any way to perhaps recover any remaining data from Data_1.ndf or is it completely toast?
(This was a hastily stood-up development server and there was no backup/recovery strategy for it, because "Azure never crashes".)
You are hosed. It's a miracle you can bring up your database at all. Are you sure you can retrieve data - have you tried running selects? You will probably receive more extensive answers on the Database Administrators site.
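A quick way to act on that suggestion and see what is still readable (database, schema, and table names are placeholders):

> sqlcmd -S localhost -d MyDb -Q "SELECT name, state_desc FROM sys.database_files"
> sqlcmd -S localhost -d MyDb -Q "SELECT TOP (1) * FROM dbo.SomeTable"

The first query shows which files the engine considers ONLINE versus OFFLINE; the second will error out for any table whose pages live in the offline filegroup.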

Access Newly Created PostgreSQL Cluster

I am looking to create a new database cluster in PostgreSQL because I want this cluster to point to a different (and larger) data directory than my current cluster on localhost:5432 uses. To create the new cluster, I ran the command below. However, after restarting PostgreSQL I don't see the new cluster in pgAdmin and can't connect to a server on port 5435. How do I connect to this new cluster? Alternatively, I thought I could create a new tablespace within the old cluster pointing at this larger data directory, but I'm assuming that populating a database using that tablespace would still result in files being written to the cluster's data directory? I'm running PostgreSQL 9.3 on Ubuntu 12.04.
$ pg_createcluster -d /home/foo 9.3 test_env
Creating new cluster 9.3/test_env ...
config /etc/postgresql/9.3/test_env
data /home/foo/
locale en_US.UTF-8
port 5435
You need to specify option '--start' if you want it to actually start the database. Without that it will just create the data directory.
However, using tablespaces would probably be a better solution, since running multiple database clusters introduces the overhead of running completely separate postmaster processes. If you create a database in a tablespace, then all files associated with the database will go in the associated directory. The system catalogue will still be in the cluster's data directory, but that is unlikely to be large enough to cause you any problems.
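A sketch of both routes on Debian/Ubuntu, using the cluster name and port from the output above (the tablespace name and its directory are placeholders, and the directory must exist, be empty, and be owned by postgres):

$ sudo pg_ctlcluster 9.3 test_env start   # start the already-created cluster
$ psql -h localhost -p 5435 -U postgres   # connect to it on its port

Or, staying with the original cluster and using a tablespace instead:

$ sudo -u postgres psql -c "CREATE TABLESPACE bigdata LOCATION '/home/foo/ts'"
$ sudo -u postgres psql -c "CREATE DATABASE mydb TABLESPACE bigdata"

Note that pgAdmin does not discover new clusters by itself; you need to register a new server connection pointing at port 5435.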

Copying a tablespace from one postgresql instance to another

I'm looking for a way to quickly "clone" a database from one PostgreSQL server to another.
Assuming...
I have a postgresql server running on HostA, serving 2 databases
I have 2 devices mounted on HostA; each device stores the data for one of the databases (i.e. 1 database => 1 tablespace => 1 device)
I am able to obtain a "safe" point in time snapshot through some careful method
I am able to consistently and safely produce a clone of either of the 2 devices used (you could even assume both databases only ever receive reads)
The hosts involved are CentOS 5.4
Is it possible to host a second postgresql server on HostB, mount one of the cloned devices and get the corresponding database to "pop" into existence? (Kind of like when you copy the MyISAM table files in MySQL).
If this is at all possible, what is the mechanism (i.e. what DDL or pg commands should I look into)?
It's important for me to be able to move individual databases in isolation from each other. But if this isn't possible, would a similar approach work at the server level (cloning and respawning a whole server by copying the datadir over to a host with the same PostgreSQL install)?
Not easily, because a number of files are shared between all databases in an installation, which means each database in the same cluster depends on them.
You can do it at the server level, or at the cluster level, but not at individual database level. Just be sure to copy/clone over the whole data directory and all external tablespaces. As long as you can make the clone atomically (either on the same filesystem or with a system that can do atomic clones across filesystems), you don't even need to stop the database on hostA to do it.
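A minimal sketch of that cluster-level clone, assuming the data directory lives at /var/lib/pgsql/data, the tablespace devices mount at /srv/ts1 and /srv/ts2, and all volumes were captured in one atomic snapshot (paths are placeholders; the symlinks under pg_tblspc/ store absolute paths, so the clones must be mounted at the same paths on HostB):

$ sudo mount /dev/cloned_data /var/lib/pgsql/data   # cloned data-directory volume
$ sudo mount /dev/cloned_ts1 /srv/ts1               # cloned tablespace volumes,
$ sudo mount /dev/cloned_ts2 /srv/ts2               # same mount points as on HostA
$ sudo -u postgres pg_ctl -D /var/lib/pgsql/data -l /tmp/pg.log start

On startup Postgres treats the snapshot like a crash image and replays WAL to reach a consistent state, which is why the clone needs to be atomic.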
