Detaching Additional Disk Error on GCP - Compute Engine - VM Instances - gcp-compute-instance

I had three additional disks attached to my VM instance and wanted to detach one of them, so I edited the VM instance and detached the disk.
After this I checked the disks with lsblk, but it now shows only one disk instead of two.
So I restarted the VM instance, and now I am not even able to SSH in.


How to set up an existing Postgres DB on a new server?

I have a PostgreSQL DB on an AWS instance. For some reason the instance is now damaged, and the only thing I can do is detach the disk volume and attach it to a new instance.
The challenge I have now is: how do I set up the PostgreSQL DB from the damaged instance's volume on the new instance without losing any data?
I tried to attach the damaged instance's volume as the main volume on the new instance, but it doesn't boot up, so instead I mounted the volume as a secondary disk. Now I can see the information on it, including the "data" folder where the Postgres DB information is supposed to be, but I don't know what to do in order to enable the DB on this new instance.
The files in the /path/to/data/ directory are all you need for a PostgreSQL instance to start up, provided the permissions are set to 0700 and the directory is owned by the user starting the process (usually postgres, but sometimes others). The other things to bear in mind are:
Destination OS needs to be the same as where you got the data/ directory from (because filesystem variations may either corrupt your data or prevent Postgres from starting up)
Destination filesystem needs to be the same as where you got your data/ directory from (for reasons similar to above)
Postgres major version needs to be the same as where you got your data/ directory from (the on-disk format changes between major versions, so a different major version will refuse to start against the directory)
If these conditions are met, you should be able to bring up the database and use it.
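As a minimal sketch of the preparation step (the DATADIR placeholder is hypothetical; in a real recovery you would point it at the mounted copy of data/, chown it to the postgres user, and start the matching major version's binaries against it):

```shell
# Sketch: prepare a recovered data/ directory so Postgres will accept it.
# DATADIR is a hypothetical placeholder; point it at the mounted copy of data/.
DATADIR="${DATADIR:-$(mktemp -d)}"
chmod 700 "$DATADIR"            # Postgres refuses to start with looser permissions
stat -c '%a' "$DATADIR"         # prints: 700
# Then, as the owning user (usually postgres): pg_ctl -D "$DATADIR" start
```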

How to detach data disk in Azure SQL VM

For temporary testing purposes, I created a SQL VM in Azure, and the Azure wizard assigned me an OS disk of 127 GB and a data disk of 1 TB. But the cost of the data disk is a little expensive for me, so I changed the server's default data and log paths to the OS (C:) disk, backed up the DBs to the C: disk, and then detached the data (F:) disk.
The problem is that SQL Server now fails to start without the data disk. What should I do if I want to run SQL Server without the data (F:) disk?
The C: drive is dedicated to the OS. While putting data and log files on the same drive as the OS may work for test/dev workloads, it is not recommended for production workloads. Depending on your VM type, the OS drive may use Standard Disks (HDD) or Premium Disks, and there are IOPS limits for each of these disk types. For instance, Standard Disks can support up to 500 IOPS, and it is important to reserve those IOPS for the OS so that OS operations do not have to compete for IOPS with other applications. Starving OS operations can result in a VM restart.
As far as I recall, you don't really need the data disk to run SQL Server on an Azure VM. By default it is used to host the database files, but you can move those to the C: disk and repoint SQL Server at them. There are many ways to do that; you can consult the official docs.
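A hedged sketch of the repoint step: the logical file names and target paths below are assumptions, not taken from the question (check yours in sys.master_files). The T-SQL is printed here for illustration; on the VM you would run it via sqlcmd or SSMS and then restart the SQL Server service.

```shell
# Print (not execute) illustrative T-SQL that repoints tempdb's files at C:.
# NAME values and paths are hypothetical; verify against sys.master_files first.
TSQL=$(cat <<'SQL'
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'C:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'C:\SQLData\templog.ldf');
SQL
)
echo "$TSQL"   # run via sqlcmd on the VM, then restart the SQL Server service
```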

Multiple File groups on a Virtual machine for SQL Server

In many SQL Server articles it is mentioned that the best practice is to use multiple filegroups on a physical disk to avoid disk contention and disk spindle issues. So my questions are:
1. Does the same theory of having multiple filegroups hold true for a virtual machine?
2. Should I still put tempdb on a different disk, and should I also create multiple tempdb files to avoid large read/write operations on the same tempdb file, in a virtual machine setup for my production environment?
Your recommendation and reasoning would be helpful to decide on the best practice.
Yes, it still applies to virtual servers. Part of the contention problem is accessing the Global Allocation Map (GAM) or Shared Global Allocation Map (SGAM), which exists for each database file and can only be accessed by one process or thread at a time. This is the "latch wait" problem.
If your second disk is actually on different spindles, then yes. If the database files would be on different logical disks but identical spindles, then it's not really important.
The MS recommendation is that you should create one database file for each logical processor on your server, up to 8. You should test to see if you find problems with latch contention on tempdb before adding more than 8 database files.
You do not need to (and generally should not) create multiple tempdb log files because those are used sequentially. You're always writing to the next page in the sequence, so there's no way to split up disk contention.
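The file-count rule above can be sketched as a one-liner (nproc assumes a Linux box; on Windows you would read the logical CPU count another way):

```shell
# One tempdb data file per logical CPU, capped at 8, per the guidance above.
CPUS=$(nproc)
TEMPDB_FILES=$(( CPUS < 8 ? CPUS : 8 ))
echo "$TEMPDB_FILES"
```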
The question needs a bit more information about your environment. If the drives for the VM are hosted on a SAN somewhere and the drives presented to the VM are all spread across the same physical disks on the SAN then you're not going to avoid contention. If, however, the drives are not on the same physical drives then you may see an improvement. Your SAN team will have to advise you on that.
That being said, I still prefer having the log files split from the data files, and tempdb on its own drive. The reasoning is that if a query doesn't go as planned, it can fill the log-file drive, which may take that database offline, but other databases may still be able to keep running (assuming they have enough empty space in their log files).
Again with tempdb: if it does get filled, the transaction will error out, and everything else should keep running without intervention.

Backup PostgreSQL database hosted on AWS EC2 without shutting down or restarting the master

I'm using PostgreSQL v9.1 for my organization. The database is hosted on Amazon Web Services (an EC2 instance) behind a Django web framework which performs tasks on the database (reading/writing data). The problem is how to back up this database periodically in a specified format (see Requirements).
A standby server is available for backup purposes.
The master-db is to be backed up every hour. At the top of each hour, the db is quickly backed up in its entirety and then copied to the slave as a file-system archive.
Along with hourly backups, I need to perform a daily backup of the database at midnight and a weekly backup on midnight of every Sunday.
Weekly backups will be the final backups of the db. All weekly backups will be saved; only the daily backups of the last week and the hourly backups of the last day will be kept.
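The cadence above could be expressed as cron entries; the script names are hypothetical placeholders for whichever backup commands end up being used:

```
# Hypothetical crontab sketch for the hourly/daily/weekly cadence above
0 * * * *   /usr/local/bin/pg-backup-hourly.sh    # top of every hour
0 0 * * *   /usr/local/bin/pg-backup-daily.sh     # midnight, daily
0 0 * * 0   /usr/local/bin/pg-backup-weekly.sh    # midnight, Sundays
```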
But I have the following constraints too.
Live data comes into the server every day (roughly one insertion every 2 seconds).
The database now hosts critical customer data, which implies that it cannot be turned off.
Usually data stops coming into the db at night, but there's a good chance that data might arrive in the master-db during some nights, and I have no way to stop the insertions (customer data would be lost).
If I use traditional backup mechanisms/software (for example, barman), I have to configure archive mode in postgresql.conf and authenticate users in pg_hba.conf, which implies I need a server restart to turn it on, which again stops the incoming data for some minutes. This is not permitted (see the above constraint).
Is there a clever way to backup the master-db for my needs? Is there a tool which can automate this job for me?
This is a very crucial requirement, as data started arriving in the master-db a few days ago and I need to make sure there is a replica of the master-db on some standby server at all times.
Use EBS snapshots
If, and only if, your entire database including pg_xlog, data, pg_clog, etc is on a single EBS volume, you can use EBS snapshots to do what you describe because they are (or claim to be) atomic. You can't do this if you stripe across multiple EBS volumes.
The general idea is:
Take an EBS snapshot using the EBS APIs, via the command line AWS tools or a scripting interface like the wonderful boto Python library.
Once the snapshot completes, use AWS API commands to create a volume from it and attach that volume to your instance (or preferably to a separate instance), then mount it.
On the restored volume you will find a read-only copy of your database from the point in time you took the snapshot, as if your server had crashed at that moment. PostgreSQL is crash-safe, so that's fine (unless you did something really stupid like set fsync=off in postgresql.conf). Copy the entire database structure to your final backup, e.g. archive it to S3 or whatever.
Unmount, detach, and destroy the volume you created from the snapshot.
This is a terribly inefficient way to do what you want, but it will work.
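The cycle above might look like the following dry-run sketch. All IDs, the zone, and the device/mount paths are made-up placeholders, and the commands are echoed rather than executed; in practice you would run them through the AWS CLI with credentials configured, or drive the equivalent calls through boto.

```shell
# Dry-run sketch of the EBS snapshot/restore cycle (all IDs are hypothetical).
VOL=vol-0123456789abcdef0
PLAN="aws ec2 create-snapshot --volume-id $VOL --description 'hourly pg backup'
aws ec2 create-volume --snapshot-id snap-0fedcba9876543210 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0aaaabbbbccccdddd0 --instance-id i-0123456789abcdef0 --device /dev/xvdf
mount -o ro /dev/xvdf1 /mnt/pg-backup"
echo "$PLAN"
```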
It is vitally important that you regularly test your backups by restoring them to a temporary server and making sure they're accessible and contain the expected information. Automate this, then check manually anyway.
Can't use EBS snapshots?
If your volume is mapped via LVM, you can do the same thing at the LVM level in your Linux system. This works for the lvm-on-md-on-striped-ebs configuration. You use lvm snapshots instead of EBS, and can only do it on the main machine, but it's otherwise the same.
You can only do this if your entire DB is on one file system.
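A sketch of the LVM variant, again as an echoed plan since the real commands need root and an actual volume group; the vg0/pgdata names are hypothetical.

```shell
# Dry-run sketch of an LVM snapshot backup (vg0/pgdata are hypothetical names).
LVM_PLAN="lvcreate --snapshot --size 10G --name pgsnap /dev/vg0/pgdata
mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
# ... copy /mnt/pgsnap to the backup target ...
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap"
echo "$LVM_PLAN"
```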
No LVM, can't use EBS?
You're going to have to restart the database. You do not need to restart it to change pg_hba.conf, a simple reload (pg_ctl reload, or SIGHUP the postmaster) is sufficient, but you do indeed have to restart to change the archive mode.
This is one of the many reasons why backups are not an optional extra, they're part of the setup you should be doing before you go live.
If you don't change the archive mode, you can't use PITR, pg_basebackup, WAL archiving, pgbarman, etc. You can use database dumps, and only database dumps.
So you've got to find a time to restart. Sorry. If your client applications aren't entirely stupid (i.e. they can handle waiting on a blocked tcp/ip connection), here's how I'd try to do it after doing lots of testing on a replica of my production setup:
Set up a PgBouncer instance
Start directing new connections to the PgBouncer instead of the main server
Once all connections are via pgbouncer, change postgresql.conf to set the desired archive mode. Make any other desired restart-only changes at the same time, see the configuration documentation for restart-only parameters.
Wait until there are no active connections
SIGSTOP pgbouncer, so it doesn't respond to new connection attempts
Check again and make sure nobody made a connection in the interim. If they did, SIGCONT pgbouncer, wait for it to finish, and repeat.
Restart PostgreSQL
Make sure I can connect manually with psql
SIGCONT pgbouncer
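The sequence might be scripted roughly as follows. The service name and the pgbouncer process lookup are assumptions, and the plan is printed rather than executed; as the answer stresses, rehearse this on a replica before touching production.

```shell
# Sketch of the freeze/restart sequence above (names hypothetical; printed, not run).
SWITCH_PLAN='kill -STOP $(pgrep -o pgbouncer)    # stop pgbouncer answering new connections
psql -c "SELECT count(*) FROM pg_stat_activity"  # check nobody slipped in
systemctl restart postgresql                     # restart-only settings take effect
psql -c "SELECT 1"                               # confirm manual connections work
kill -CONT $(pgrep -o pgbouncer)                 # let pgbouncer resume'
echo "$SWITCH_PLAN"
```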
I'd rather explicitly set pgbouncer to a "hold all connections" mode, but I'm not sure it has one, and don't have time to look into it right now. I'm not at all certain that SIGSTOPing pgbouncer will achieve the desired effect, either; you must experiment on a replica of your production setup to ensure that this is the case.
Once you've restarted
Use WAL archiving and PITR, plus periodic pg_dump backups for extra assurance.
... and of course, the backup chapter of the user manual, which explains your options in detail. Pay particular attention to the "SQL Dump" and "Continuous Archiving and Point-in-Time Recovery (PITR)" chapters.
PgBarman automates the PITR option for you, including scheduling, and supports hooks for storing WAL and base backups in S3 instead of local storage. Alternatively, WAL-E is a bit less automated but comes pre-integrated with S3. You can implement your retention policies with S3, or via barman.
(Remember that you can use retention policies in S3 to shove old backups into Glacier, too).
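For reference, the restart-only archiving settings discussed above look roughly like this in postgresql.conf (9.1-era names, illustrative values; the archive path and command are site-specific assumptions):

```
# postgresql.conf sketch (9.1-era settings; 'replica' covers wal_level on newer versions)
wal_level = archive
archive_mode = on                 # changing this requires a restart
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'
```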
Reducing future pain
Outages happen.
Outages of single-machine setups on something as unreliable as Amazon EC2 happen a lot.
You must get failover and replication in place. This means that you must restart the server. If you do not do this, you will eventually have a major outage, and it will happen at the worst possible time. Get your HA setup sorted out now, not later, it's only going to get harder.
You should also ensure that your client applications can buffer writes without losing them. Relying on a remote database on an Internet host to be available all the time is stupid, and again, it will bite you unless you fix it.

How to properly handle disaster recovery to Azure cloud with vm, disk, and local file copy

I have a question, I hope you might be able to help me with. Following is a scenario that I would like to create using Azure. Could you let me know if this is feasible, and how I would go about doing this?
• Create a virtual machine, which I have done.
• Add an empty disk, format it, create a volume, etc.
• Now, I want an area on this disk to which I will copy data from our local network. This would be done as a backup in the event of a local infrastructure failure.
The idea behind this is to have a virtual machine that always has the latest copy of important on-premise files, along with installed applications, allowing our users to remote into this VM in the event local services are disrupted.
I have the virtual machine, the storage container, and the empty disk mounted, formatted, and available on my VM, but what do I do now to make an area (or the entire drive) available for on-premise file copies? I am evaluating CloudBerryLab's backup application, but with it I can only seem to send my file copies to a storage area that is not a disk drive, and hence not attached to the virtual machine.
So, am I not understanding how to handle this scenario properly? What tools should be used to make this happen, or is there a better architecture in Azure to handle this?
Thank You.
If you have a current version of Windows Server on-prem, you can just sign up for Azure Backup. Instead of keeping a VM running and paying for compute hours plus storage, wait and spin one up only if you need it. Restoring from Azure Backup to a VM is amazingly quick.
Once you have a Windows Azure account, you will be able to see the Backup option. Set that up, and then you can download the agent onto your Windows Server and configure the backup. An SSL certificate is required.