Databases failed to load in Cloudant Data Layer Local Edition 1.1 - Cloudant

We have lost all databases from Cloudant and, as there was no other option, we restored the data folders from a file system backup and tried to restart Cloudant.
We are facing the issue and error below; please help us resolve it.
Databases failed to load, with this error:
{"error":"internal_server_error","reason":"No DB shards could be opened.","ref":1747618916}

The issue was resolved by restoring the shards directory (some files were missing from the restored backup) and then restarting Cloudant.
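As a sanity check after a file-system restore, verify that every shard range directory still contains a .couch file for each database; a sketch, assuming the default Cloudant Local data directory (both the path and the database name are assumptions, so adjust them to your install):

# Each shard range directory should hold a .couch file for the database;
# missing shard files produce the "No DB shards could be opened" error
ls -l /srv/cloudant/db/shards/*/mydb.*.couch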

Related

Log shipping, logs are copied to the second server folder but not uploaded to the database

I have set up log shipping, but the secondary server is not restoring the copied logs into the secondary database.
I have tried to find an error, but none is reported; the jobs run without problems.
Is there any possible solution? Please base it on the official Microsoft documentation.
I would appreciate any help.
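One first check, as a sketch: the log shipping monitor table in msdb on the secondary server records the last copied and last restored file, which shows whether the restore job is actually picking anything up (table and column names are the standard msdb ones):

-- Run on the secondary server: if last_restored_file lags far behind
-- last_copied_file, the restore job is copying but not restoring
SELECT secondary_database,
       last_copied_file,
       last_restored_file,
       last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;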

Cannot Restore DB to RDS Instance

I am getting the below error when trying to restore a DB from an S3 .bak file:
100 percent processed.
[2018-11-12 19:32:21.677] Processed 32980832 pages for database 'RDSBackup', file 'SourceDB_NEW' on file 1.
[2018-11-12 19:32:21.680] Processed 4547 pages for database 'RDSBackup', file 'SourceDB_NEW_log' on file 1.
[2018-11-12 19:32:22.013] RDSBackup_FULL_COPY_ONLY_20181112_010009.bak: S3 processing has been aborted.
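For context, a native restore like the one producing this output is started with RDS's rds_restore_database procedure; a sketch, where the file name matches the log above and the bucket name is an assumption:

-- Kick off a native restore from S3 on RDS for SQL Server
exec msdb.dbo.rds_restore_database
@restore_db_name='RDSBackup',
@s3_arn_to_restore_from='arn:aws:s3:::my-bucket/RDSBackup_FULL_COPY_ONLY_20181112_010009.bak';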
The error started after the source DB version changed from SQL Server 2012 Enterprise to SQL Server 2016 Standard.
I escalated this issue to AWS. As per their latest update, they have made some changes to RDS, and I could restore the DB even with a large log file (300 GB).
A link to the details of the newest update is below:
https://aws.amazon.com/blogs/database/rds-sql-server-has-two-new-exciting-backup-and-restore-enhancements/
Alternate solution: at first, I was asked to try restoring with a smaller log file. I shrank the logs and the restore worked, as sketched below. Although AWS engineers asked me to keep an eye on the log file size, they have since changed RDS to accommodate relatively large S3 .bak files as well.
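The shrink step, as a minimal sketch on the source server before taking the .bak (the database name and the 1 GB target are assumptions; SourceDB_NEW_log is the logical log file name from the output above):

-- Shrink the transaction log before taking the backup
USE SourceDB;  -- assumed source database name
DBCC SHRINKFILE (SourceDB_NEW_log, 1024);  -- target size in MB (assumed)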

Get local copy of SQL Server hosted on Amazon RDS

I have a small (few hundred MB) SQL Server database running on RDS. I've spent several hours trying to get a copy of it onto my local SQL Server 2014 instance. All of the following fail. Any ideas what might work?
Task -> Backup fails because it doesn't give my admin account permission to back up to a local drive.
Copy Database fails during package creation with "While trying to find a folder on SQL an OLE DB error was encountered with error code 0x80040E4D".
From SSMS, while connected to the RDS server, running BACKUP DATABASE fails with the message "BACKUP DATABASE permission denied in database 'MyDB'", even after running EXEC sp_addrolemember 'db_backupoperator' for the connected user.
Generate Scripts produces a 700 MB .sql file. Running it with sqlcmd -i fails at some point after producing plausible .mdf and .ldf files that can't be mounted on the local server (probably because the sqlcmd failed to complete and unlock them).
AWS has finally provided a reasonably easy means of doing this: It requires an S3 bucket.
After creating a bucket called rds-bak I ran the following stored procedure in the RDS instance:
exec msdb.dbo.rds_backup_database
@source_db_name='MyDatabase',
@s3_arn_to_backup_to='arn:aws:s3:::rds-bak/MyDatabase.bak',
@overwrite_S3_backup_file=1;
The following stored procedure returns the status of the backup request:
exec msdb.dbo.rds_task_status @db_name='MyDatabase'
Once it finished, I downloaded the .bak file from S3 and imported it into a local SQL Server instance using the SSMS Restore Database... wizard!
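The download step itself can be done with the AWS CLI; a one-line sketch using the bucket and file names from the example above:

# Copy the completed backup from S3 to the current directory
aws s3 cp s3://rds-bak/MyDatabase.bak .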
The SSIS Import Export Wizard can generate a package to duplicate a whole set of tables. (It's not the sort of Copy Database function that relies on files - it makes a package with data flow components for each table.)
It's somewhat brittle but can be made to work :-)
The SSMS Generate Scripts feature often fails with a large data set, as the script for all the data is simply too large/verbose. The SSIS approach never scripts out the data.
Check this out: https://github.com/andrebonna/RDSDump
It is a C#.NET console application that searches for the latest origin database snapshot, restores it on a temporary RDS instance, generates a BACPAC file, uploads it to S3, and deletes the temporary RDS instance.
You can transform your RDS snapshot into a BACPAC file, which can be downloaded and imported onto your local SQL Server 2014 instance using the approach described here (Azure SQL Database Bacpac Local Restore).
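The local import of that BACPAC can be done with SqlPackage; a minimal sketch, assuming a default local instance and a file in the current directory (names are placeholders):

# Import the downloaded BACPAC into a local SQL Server instance
SqlPackage /Action:Import /SourceFile:MyDatabase.bacpac /TargetServerName:localhost /TargetDatabaseName:MyDatabase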
Redgate's SQL Compare and SQL Data Compare are invaluable for these types of things. They are not cheap (but worth every penny, IMO). But if this is a one-time thing, you could use the 14-day trial and see how it behaves for you.
http://www.red-gate.com/products/

SQL Server Database Primary Data File got lost

SQL Server 2008 R2 stopped all of a sudden, possibly due to a power fluctuation.
I tried all the possible ways to restart it, but every time it fails with the error
The request failed or the service did not respond in a timely fashion.
Some of the ways I tried are:
Making SQL Server log on as "Local System" instead of "NetworkService"
Replacing the Master.mdf and mastlog.ldf files with the copies from the "Bin/Templates" folder
Disabling "VIA" (which was already disabled)
But all in vain :(
On checking further, I noticed that both files of my database, i.e. mydb.mdf and mydb.ldf, are missing from the DATA folder; instead there are mydb_1.ndb and mybd_2.ldf files.
How can I recover the mydb.mdf file and restart SQL Server?
Thank you.
SQL Server data files can be named anything, so mydb_1.ndb could be your data file.
If that's true, you should be able to recover the data by:
Install a new SQL Server instance (SQL Express would work if the DB is < 10 GB)
Move the mydb_1.ndb and mybd_2.ldf files onto that server
Use "Attach..." from SSMS to add the database to the new server
If you are lucky and that .ndb is just a differently named .mdf file, you should be able to access the data; a T-SQL sketch of the attach step is below.
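A minimal T-SQL equivalent of the SSMS Attach... step, assuming the files were copied to C:\Data (the path and database name are assumptions):

-- Attach the renamed files as a database; SQL Server reads the file
-- headers, so the unusual .ndb extension does not matter
CREATE DATABASE mydb
ON (FILENAME = 'C:\Data\mydb_1.ndb'),
   (FILENAME = 'C:\Data\mybd_2.ldf')
FOR ATTACH;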
Then you can repair your existing server (a reinstall will be easier than messing with the master database, unless you've got other DBs on there) and move the database back over, i.e. use the same Attach... method.
Oh - and start backing it up :)

Cannot Restore from Backup File to Sybase

We have a very old Sybase server, and our database on it is acting up. We need to restore the database backup file from our backup Sybase server. But when I try that, I keep getting this error message:
Msg 7205, Level 17, State 2: Line 1: Can't open a connection to site 'SYB_BACKUP'. See the error log file in the SQL Server boot directory.
This is how I restore the database backup:
1. Use RCP to copy the dump file from the spare server to the primary server, naming the copy "frombkup_mydb.dump".
2. Drop the old database from the primary server, and re-create an empty one.
3. Then use the following command to load the database from the backup dump file:
load database mydb from "/export/home/syb11.dump/frombkup_mydb.dump"
Unfortunately I don't know where the error log file is. I am not familiar with SCO Unix and Sybase.
Does anyone know why the restore doesn't work?
Please help. Thanks.
Jay Chan
It's likely that your Backup Server is not running.
The SAP/Sybase ASE database process requires the backup server to be running for database backups or restores.
To find which database processes are running, you can use the showserver command, usually located in:
$SYBASE/$SYBASE_ASE/install/showserver
If the backup server is not running (likely), then in the ./install/ directory, look for the file named RUN_SYB_BACKUP
You can start the backup server by issuing the command (from the ./install/ directory):
startserver -f RUN_SYB_BACKUP
This should start the backup server, and allow you to restore the database.
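Once the backup server is up, re-run the load and then bring the database online, which ASE requires after a load before the database is usable; a sketch from isql (the login and server name are assumptions):

# Reconnect with isql (login and server name are assumptions)
isql -Usa -SMYSERVER
1> load database mydb from "/export/home/syb11.dump/frombkup_mydb.dump"
2> go
1> online database mydb
2> go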
