I need to implement an automatic transfer of daily backups from one DB to another DB. Both DBs and apps are hosted on Heroku.
I know this is possible to do manually from a local machine with the command:
heroku pgbackups:restore DATABASE `heroku pgbackups:url --app production-app` --app staging-app
But this process should be automated and should not run from a local machine.
My idea is to write a rake task which will execute this command, and to run it daily with the help of the Heroku Scheduler add-on.
Any ideas on the best way to do this? Or maybe there is a better approach for this task?
Thanks in advance.
I managed to solve the issue myself. It turned out to be not so complex. Here is the solution, in case it's useful to somebody else:
1. I wrote a rake task which copies the latest dump from a certain server to the DB of the current server:
namespace :backup do
  desc "copy the latest dump from a certain server to the DB of the current server"
  task restore_last_write_dump: :environment do
    # %x() returns the URL with a trailing newline, so strip it before interpolating
    last_dump_url = %x(heroku pgbackups:url --app [source_app_name]).strip
    system("heroku pgbackups:restore [DB_to_target_app] '#{last_dump_url}' -a [target_app_name] --confirm [target_app_name]")
    puts "Restored dump: #{last_dump_url}"
  end
end
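To try the task once before scheduling it, it can be run in a one-off dyno (assuming the Heroku CLI is available on the dyno, which this task needs in any case):
heroku run rake backup:restore_last_write_dump -a [target_app_name]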
2. To avoid authentication prompts on each request to the servers, create a .netrc file in the app root (see details here: https://devcenter.heroku.com/articles/authentication#usage-examples).
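A minimal sketch of what that file looks like (the email and token below are placeholders; the linked article has the exact fields):
machine api.heroku.com
  login your-email@example.com
  password <heroku-api-token>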
3. Set up the Heroku Scheduler add-on and add our rake task, along with the frequency at which it should run.
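Roughly like this (the job itself is then configured through the add-on's dashboard):
heroku addons:create scheduler:standard -a [target_app_name]
heroku addons:open scheduler -a [target_app_name]
Then add the job rake backup:restore_last_write_dump with a daily frequency.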
That is all.
I have an MSSQL pod, and I need to use sql_exporter to export its metrics.
I was able to set up this whole thing manually fine:
1. download the binary
2. install the package
3. run ./sql_exporter on the pod to start listening on a port for metrics
I tried to automate this using kubectl exec -it ... and was able to do steps 1 and 2. When I try to do step 3 with kubectl exec -it "$mssql_pod_name" -- bash -c ./sql_exporter, the script just hangs, which I understand, since the server is going to keep listening forever, but this blocks the rest of my installation scripts.
I0722 21:26:54.299112 435 main.go:52] Starting SQL exporter (version=0.5, branch=master, revision=fc5ed07ee38c5b90bab285392c43edfe32d271c5) (go=go1.11.3, user=root#f24ba5099571, date=20190114-09:24:06)
I0722 21:26:54.299534 435 config.go:18] Loading configuration from sql_exporter.yml
I0722 21:26:54.300102 435 config.go:131] Loaded collector "mssql_standard" from mssql_standard.collector.yml
I0722 21:26:54.300207 435 main.go:67] Listening on :9399
<nothing else, never ends>
Any tips on silencing this and letting it run in the background? (I cannot Ctrl-C, as that will stop the port listening.) Or is there a better way to automate the plugin install upon pod deployment? Thank you
To answer your question:
This answer should help you. You should (!?) be able to use ./sql_exporter & to run the process in the background (when not using --stdin --tty). If that doesn't work, you can try nohup as described by the same answer.
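A rough sketch of what that could look like (untested; it assumes the binary and its sql_exporter.yml sit in the container's working directory, as in your manual steps):
kubectl exec "$mssql_pod_name" -- bash -c 'nohup ./sql_exporter > sql_exporter.log 2>&1 &'
Redirecting stdout/stderr matters here; otherwise kubectl exec may keep waiting on the open output streams.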
To recommend a better approach:
Using kubectl exec is not a good way to program a Kubernetes cluster.
kubectl exec is best used for debugging rather than deploying solutions to a cluster.
I assume someone has created a Kubernetes Deployment (or similar) for Microsoft SQL Server. You now want to complement that Deployment with the exporter.
You have options:
Augment the existing Deployment and add the sql_exporter as a sidecar (another container) in the Pod that includes the Microsoft SQL Server container. The exporter accesses the SQL Server via localhost. This is a common pattern when deploying functionality that complements an application (e.g. logging, monitoring).
Create a new Deployment (or similar) for the sql_exporter and run it as a standalone Service. Configure it to scrape one or more Microsoft SQL Server instances (a rough sketch of this route follows after the list below).
Both these approaches:
take more work, but they're "more Kubernetes" solutions and provide better repeatability, auditability, etc.
require that you create a container image for sql_exporter (although I assume the exporter's authors already provide this).
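For the standalone route, a very rough sketch (the image name is a placeholder, and the exporter's sql_exporter.yml with the SQL Server connection details would still need to be provided, e.g. from a ConfigMap):
kubectl create deployment sql-exporter --image=<some-sql_exporter-image>
kubectl expose deployment sql-exporter --port=9399 --target-port=9399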
I am trying to migrate a series of Trac projects originally hosted on CloudForge onto a new Bitnami virtual machine (debian with Trac stack installed).
The documentation on the Trac wiki regarding restoring from a backup is a little vague for me, but it suggests that I should be able to set up a new project
$ sudo trac-admin PROJECT_PATH initenv
stop the services from running
$ sudo /opt/bitnami/ctlscript.sh stop
copy the snapshot from the backup into the new project path and restart the services
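(in my case the copy was roughly the following; the source path is a placeholder for wherever the CloudForge snapshot was unpacked)
$ sudo cp -r /path/to/cloudforge_snapshot/* PROJECT_PATH/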
$ sudo /opt/bitnami/ctlscript.sh start
and should be good to go.
Having done this (and worked through quite a few issues along the way), I have now got to the point where the browser page shows
Trac Error
TracError: Unable to check for upgrade of trac.db.api.DatabaseManager: TimeoutError: Unable to get database connection within 0 seconds. (OperationalError: unable to open database file)
When I set up the new project, note that I left the default (unedited) database string, but I have no idea what database type was used for the original CloudForge Trac project, i.e. is there an additional step needed to restore the database?
Any help would be greatly appreciated, thanks.
Edit
Just to add, CloudForge was using Trac 0.12.5 and the new VM uses Trac 1.5.1. Not sure if this will be an issue?
Edit
More investigation, and I'm now pretty sure that the CloudForge snapshot is not an SQLite (or other) database file - it looks more like some kind of query/dump output, as it starts and ends with:
BEGIN TRANSACTION;
...
COMMIT;
Thanks to anyone taking the time to read this, but I think I'm sorted now.
After learning more about SQLite I discovered that the file sent by CloudForge was an SQLite dump of the database, and it was easy enough to migrate to a new database instance using the command line:
$ sqlite3 location_of/new_database.db < dump_file.db
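(A quick sanity check after the import is to list the tables:)
$ sqlite3 location_of/new_database.db ".tables"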
I think I also needed a prior step of removing the contents of the original new_database.db using the sqlite3 command line (just type sqlite3 in a terminal):
$ sqlite3
sqlite> .open location_of/new_database.db
sqlite> BEGIN TRANSACTION;
sqlite> DELETE FROM each_table_in_database;  -- repeat for each table
sqlite> COMMIT;
sqlite> .exit
I then had some issues with credentials on the bitnami VM, so I needed to retrieve these (as per the bitnami documentation) using
$ sudo cat /home/bitnami/bitnami_credentials
and add this USER_NAME as a TRAC_ADMIN using
$ trac-admin path/to/project/ permission add USER_NAME TRAC_ADMIN
NOTE that before and after this operation, be sure to stop and restart the bitnami services using
$ sudo /opt/bitnami/ctlscript.sh stop
$ sudo /opt/bitnami/ctlscript.sh start
I am the guy from Trac Users. You need to understand that the user isn't really stored in the db. There are some tables with columns holding the username, but there is no table for a user. Looking at your post I think your setup used htdigest, so your user info is in that credentials file. If you cat it you should see something like
username:realmname:pwhash
I think the hash is MD5, but it doesn't really matter for your problem. So if you want to create a new user you have to use
htdigest [ -c ] passwdfile realm username
then you should use trac-admin to grant the permission, and at that point your user should be able to log in.
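A hypothetical example (the digest file path and realm below are guesses; use whatever your Apache/Trac configuration actually points at):
$ htdigest /path/to/users.htdigest TracRealm new_user
$ trac-admin /path/to/project permission add new_user TRAC_ADMIN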
Cheers
Markus
I have a Rails 5 app deployed with Google App Engine using Cloud SQL for MySQL following their tutorial.
When I run a database migration,
bundle exec rake appengine:exec -- bundle exec rake db:migrate
I get a deprecation warning:
WARNING: This command is deprecated and will be removed on or after 2018-10-31. Please use `gcloud builds submit` instead.
Before I go off on a vision quest to sort this out, has anyone else converted their Rails app to use gcloud builds for rake tasks like this? Mind sharing the gist? Thanks!
1. Go to the Cloud SQL Instances page in the Google Cloud Platform Console. ...
2. Select the instance you want to add the database to.
3. Select the Databases tab.
4. Click Create database.
5. In the Create a database dialog, specify the name of the database, and optionally the character set and collation. ...
6. Click Create.
If this isn't what you're looking for, then try to start over.
I ended up finding this answer, which goes through installing the Cloud SQL Proxy so you can run the migration locally:
RAILS_ENV=production bin/rails db:migrate
I'm still interested in a new way to easily execute the command in the cloud, but running locally with a db proxy totally works for now.
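For reference, running it locally looks roughly like this with the (v1) Cloud SQL Proxy; the instance connection name is a placeholder, and config/database.yml has to point at 127.0.0.1:3306:
./cloud_sql_proxy -instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:3306 &
RAILS_ENV=production bin/rails db:migrate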
So here is my scenario:
Today my server was restarted by our hosting provider (ACPI shutdown).
My mongo database is a simple docker container (mongo:3.2.18)
For an unknown reason the container wasn't restarted on reboot (restart: always was set in docker-compose).
I started it and noticed the volume mappings were gone.
I restored them to the old paths, restarted the mongo container and it started without errors.
I connected to the database and it was completely empty.
> show dbs
local 0.000GB
> use wekan
switched to db wekan
> show collections
> db.users.find();
>
Also, I have already tried db.repairDatabase(); with no effect.
Now my _data directory contains a lot of *.wt files and more. (File list)
I found collection-0-2713973085537274806.wt which has a file size about 390MiB.
Judging by its size, this could be the data I need to restore.
Any way of restoring this data?
I already tried my luck using wt salvage according to this article, but I can't get it running - still trying.
I know: backups, backups, backups! Sadly this database wasn't backed up.
A related GitHub issue contains details about the software.
Update:
I was able to create a .dump file with the WiredTiger data engine tool. However, I can't get it imported into MongoDB.
Try running a repair on the MongoDB container. It should repair your database, and the data should be completely restored.
Start the mongo container in bash mode:
sudo docker-compose -f docker-compose.yml run mongo bash
or
docker run -it mongo bash
Once you are inside the docker container, run the MongoDB repair:
mongod --dbpath /data/db --repair
The DB should be repaired successfully and all your data should be restored.
First off, I want to say that I am not a DB expert and I have no experience with the Heroku service.
I want to deploy a Play Framework application to Heroku, and I need a database to do so. So I created a PostgreSQL database with this command, since it's supported by Heroku:
Users-MacBook-Air:~ user$ heroku addons:create heroku-postgresql -a name_of_app
And I got this as response
Creating heroku-postgresql on ⬢ benchmarkingsoccerclubs... free
Database has been created and is available
! This database is empty. If upgrading, you can transfer
! data from another database with pg:copy
So the DB now exists but is empty, of course. For development I worked with a local H2 database.
Now I want to populate the DB on Heroku using a SQL file, since it's quite a lot of data. But I couldn't find out how to do that. Is there a command for the Heroku CLI where I can hand over the SQL file as an argument and it populates the database? The file basically consists of a few tables which get created and around 10000 INSERT commands.
EDIT: I also have CSV files of all the tables, so if there is a way to populate the Postgres DB with those, that would also be great.
First, run the following to get your database's name
heroku pg:info --app <name_of_app>
In the output, note the value of "Add-on", which should look something like this:
Add-on: postgresql-angular-12345
Then, issue the following command:
heroku pg:psql <Add-on> --app <name_of_app> < my_sql_file.sql
For example (assuming your sql commands are in file test.sql):
heroku pg:psql postgresql-angular-12345 --app my_cool_app < test.sql
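For the CSV files mentioned in the edit, psql's \copy meta-command works over the same connection. A rough sketch, assuming a table named my_table already exists and my_table.csv has a header row (=> denotes the psql prompt):
heroku pg:psql postgresql-angular-12345 --app my_cool_app
=> \copy my_table FROM 'my_table.csv' WITH (FORMAT csv, HEADER true)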