Reproduce Production environment in a Rake Task in Rails - database

I'm trying to copy my entire production database (which is in Mongo) over to my staging environment, so I'm building a Rake task. First I need to connect to the production environment so I can access all my production models (Model.all.each...), but I don't know how to reproduce the production environment. I know that in the console I can do 'export RAILS_ENV=heroku_production', but I don't know how to do that inside a Rake task. This is what I'm trying for now, but it doesn't work: when I print Rails.env it says "development", so I'm a bit lost.
namespace :db do
  namespace :sync_production_staging do
    desc "Copy production database to staging"
    task :staging => :environment do
      system "export RAILS_ENV=heroku_production"
      ap Rails.env
      ap User.all
    end
  end
end
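A side note on the snippet above: system "export RAILS_ENV=heroku_production" only sets the variable in a throw-away subshell, and by the time the task body runs, the :environment prerequisite has already booted Rails, which is why Rails.env still prints "development". The environment has to be chosen before rake loads Rails. A minimal sketch, reusing the names from the question (the guard and output lines are illustrative only):

namespace :db do
  desc "Copy production data to staging (choose the env when invoking rake)"
  task :sync_production_staging => :environment do
    # Invoke as: RAILS_ENV=heroku_production bundle exec rake db:sync_production_staging
    abort "Re-run with RAILS_ENV=heroku_production" unless Rails.env == "heroku_production"
    ap Rails.env   # => "heroku_production"
    ap User.all
  end
end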

I have a script that copies my database from Heroku to my local machine; it's a really straightforward process. I'm sorry that this is PG and not Mongo, but I am sure that this should help:
# lib/tasks/db.rake
namespace :db do
  desc "Import most recent database dump"
  task :import_from_prod => :environment do
    puts 'heroku run pg:backups capture --app sushi-prod'
    restore_backup 'sushi-prod'
  end

  def path_to_heroku
    ['/usr/local/heroku/bin/heroku', '/usr/local/bin/heroku'].detect { |path| File.exists?(path) }
  end

  def heroku(command, site)
    `GEM_HOME='' BUNDLE_GEMFILE='' GEM_PATH='' RUBYOPT='' #{path_to_heroku} #{command} -a #{site}`
  end

  def restore_backup(site = 'sushi-prod')
    dump_file = "#{Rails.root}/tmp/postgres.dump"
    unless File.exists?(dump_file)
      pgbackups_url = heroku('pg:backups public-url -q', site).chomp
      puts "curl -o #{dump_file} #{pgbackups_url}"
      system "curl -o #{dump_file} '#{pgbackups_url}'"
    end
    database_config = YAML.load(File.open("#{Rails.root}/config/database.yml")).with_indifferent_access
    dev_db = database_config[Rails.env]
    system "pg_restore -d #{dev_db[:database]} -c #{dump_file}".gsub(/\s+/, ' ')
    puts
    puts "'rm #{dump_file}' to redownload postgres dump."
    puts "Done!"
  end
end
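A rough Mongo counterpart to the Postgres script above, as a sketch only: it assumes both Heroku apps expose their connection string in a MONGODB_URI config var (the exact variable name depends on the Mongo add-on), that mongodump/mongorestore are installed locally, and the app names are placeholders:

# lib/tasks/mongo_sync.rake
namespace :db do
  desc "Dump the production Mongo database and restore it into staging"
  task :sync_production_staging => :environment do
    prod_uri    = `heroku config:get MONGODB_URI -a my-app-production`.chomp
    staging_uri = `heroku config:get MONGODB_URI -a my-app-staging`.chomp
    dump_dir    = "#{Rails.root}/tmp/mongo_dump"

    # Dump everything from production, then restore it over staging
    # (--drop removes the staging collections first so the copy is exact).
    system("mongodump --uri '#{prod_uri}' --out #{dump_dir}") || abort("mongodump failed")
    system("mongorestore --uri '#{staging_uri}' --drop #{dump_dir}") || abort("mongorestore failed")
    puts "Done!"
  end
end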

Related

Nomad task getting killed

I have two tasks in a task group:
1) a db task to bring up a db, and
2) the app that needs the db to be up.
Both start in parallel, and the db task takes a little time to come up, but by then the app recognizes that the db is not up and kills the db task. Any solutions? Please advise.
It's somewhat common to have an entrypoint script that checks whether the db is healthy. Here's a script I've used before:
#!/bin/sh
set -e

cmd="$*"

postgres_ready() {
  if test -z "${NO_DB}"
  then
    PGPASSWORD="${RDS_PASSWORD}" psql -h "${RDS_HOSTNAME}" -U "${RDS_USERNAME}" -d "${RDS_DB_NAME}" -c '\l'
    return $?
  else
    echo "NO_DB Postgres will pretend to be up"
    return 0
  fi
}

until postgres_ready
do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - continuing..."
exec "${cmd}"
You could save it as entrypoint.sh and run it with your application start script as the argument, e.g. entrypoint.sh python main.py

Greenplum: Purging database Logs

Is there any direct utility available to purge older logs from a GP database? Doing it manually takes a lot of time, as there are 100+ segments and I have to go to each server and delete the log files by hand.
Other details: GP version - 4.3.X.X (Software Only Solution)
Cluster Config - 2+10
Thanks
I suggest you create a cron job and use gpssh to do this. For example:
gpssh -f ~/host_list -e 'for i in $(find /data/primary/gpseg*/pg_log/ -name "*.csv" -ctime +60); do rm $i; done'
This will remove files in pg_log on all segments that are over 2 months old. Of course, you should test this and make sure the path to pg_log is correct.

How to set the server variable on the fly using capistrano 3

We're trying to make our deployment scripts as generic as possible. Is it possible to have Capistrano 3 prompt for the server address rather than setting it in the config files?
So far I have a Capistrano task that does:
namespace :config do
  task :setup do
    ask(:db_user, 'db_user')
    ask(:db_pass, 'db_pass')
    ask(:db_name, 'db_name')
    ask(:db_host, 'db_host')
    ask(:application, 'application')
    ask(:web_server, 'server')
    setup_config = <<-EOF
#{fetch(:rails_env)}:
  adapter: postgresql
  database: #{fetch(:db_name)}
  username: #{fetch(:db_user)}
  password: #{fetch(:db_pass)}
  host: #{fetch(:db_host)}
    EOF
    on roles(:app) do
      execute "mkdir -p #{shared_path}/config"
      upload! StringIO.new(setup_config), "#{shared_path}/config/database.yml"
    end
  end
end
and in my production.rb file I have:
set :application, "#{fetch(:application)}"
set :server_name, "#{fetch(:application)}.#{fetch(:server)}"
set :app_port, "80"
But when I do cap production config:setup to run the config task, I get an error asking me for a password. If I hard-code the server address in the production.rb file it works fine... how can I resolve this?
Thanks
I hope that someone else offers a more elegant solution, but if not:
I've done this in some cases with environment variables. If you want, you can also use a Makefile to simplify some of the env combinations.
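For example, something along these lines in config/deploy/production.rb lets you pass the server address on the command line instead of hard-coding it (a sketch only; the DEPLOY_SERVER and APP_NAME variable names are made up):

# config/deploy/production.rb -- hypothetical sketch
server ENV.fetch('DEPLOY_SERVER'), user: 'deploy', roles: %w{app web db}
set :application, ENV.fetch('APP_NAME', 'myapp')
set :server_name, "#{ENV.fetch('APP_NAME', 'myapp')}.#{ENV.fetch('DEPLOY_SERVER')}"
set :app_port, "80"

and then invoke it as DEPLOY_SERVER=203.0.113.10 APP_NAME=myapp cap production config:setup.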

trouble accessing mssql db using perl and freetds on unix

I'm trying to connect to an MS SQL database using Perl and FreeTDS. I have tested the FreeTDS installation from the Unix command line with
/usr/local/exec/bin/tsql -S myDB -I freetds.conf -U userName -P passw0rd -D DataBase1 -o q < query.sql
where query.sql contains my SQL query. It runs perfectly well. But when I try the same with Perl it gives me the following error:
Your sybase home directory is /opt/sybase. Check the environment variable SYBASE if it is not the one you want! Cannot access file /opt/sybase/config/objectid.dat
but running 'set | grep SYBASE' yields SYBASE=/usr/fsf/freetds.
Below is my Perl code:
#!/usr/bin/perl5/core/5.8.8/exec/bin/perl
use lib qw(/usr/perl5/core/5.8.8/exec/lib);
use lib qw(/usr/perl5/DBI/1.607/exec/5.8/lib/perl5);
use lib qw(/usr/perl5/DBD-Sybase/1.09/exec/5.8/lib/perl5);
use DBI;
use DBD::Sybase;

my $user   = "userName";
my $passwd = "passw0rd";
my $server = "myDB";

`export SYBASE=/usr/fsf/freetds`;
`export LD_LIBRARY_PATH=/usr/fsf/freetds/0.82/exec/lib`;
`export FREETDSCONF=./freetds.conf`;

my $dbh = DBI->connect("DBD:Sybase:server=$server", $user, $passwd, {PrintError => 0});
unless ($dbh) {
    die "ERROR: Failed to connect to server ($server).\nERROR MESSAGE: $DBI::errstr";
}
else {
    print "\n";
    print "Successful Connection.";
}
Any help much appreciated!
The path to your drivers says 5.10. You might have downloaded the drivers for the wrong version of perl. Either update to 5.10.1 or get the drivers for 5.8.8.
I have figured it out. You need to set the SYBASE environment variable before you install DBD-Sybase. That's the reason behind 'Your sybase home directory is /opt/sybase' when it is supposed to point to the FreeTDS installation location. Ref: http://www.idevelopment.info/data/SQLServer/DBA_tips/Programming/PROG_4.shtml#Install%20DBD-Sybase

How to copy a file to a bunch of servers with capistrano

I use cap invoke a lot to run commands on a bunch of servers. I would like to also use Capistrano to push a single file to a bunch of servers.
At first I thought that put would do it, but put makes you supply the data for the file. I don't want to do that; I just want to copy an existing file from the machine where I'm running the capistrano command to the other machines.
It would be cool if I could do something like this:
host1$ cap HOSTS=f1.foo.com,f2.foo.com,f3.foo.com COPY /tmp/bar.bin
I would expect this to copy host1:/tmp/bar.bin to f1.foo.com:/tmp/bar.bin and f2.foo.com:/tmp/bar.bin and f3.foo.com:/tmp/bar.bin
This kind of thing seems very useful so I'm sure there must be a way to do this...
upload(from, to, options={}, &block)
The upload action stores the file at the given path on all servers targeted by the current task.
If you ever used the deploy:upload task before, then you might already know how this method works. It takes the path of the resource you want to upload and the target path on the remote servers.
desc "Uploads CHANGELOG.txt to all remote servers."
task :upload_changelog do
upload("#{RAILS_ROOT}/CHANGELOG.txt", "#{current_path}/public/CHANGELOG")
end
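If you want something closer to the cap HOSTS=... COPY /tmp/bar.bin invocation from the question, a small wrapper task along these lines should work (a sketch only; the copy_file task name and FILE variable are made up):

desc "Copy a single local file to the same path on all targeted servers"
task :copy_file do
  file = ENV['FILE'] or abort "Usage: cap HOSTS=f1.foo.com,f2.foo.com copy_file FILE=/path/to/file"
  # upload pushes the local file to the same path on every server in HOSTS
  upload(file, file, :via => :scp)
end

Invoked as: cap HOSTS=f1.foo.com,f2.foo.com,f3.foo.com copy_file FILE=/tmp/bar.bin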
This uploads the listed files to the respective servers:
cap deploy:upload FILES=abc,def
Show all tasks:
cap -T
cap deploy # Deploys your project.
cap deploy:check # Test deployment dependencies.
cap deploy:cleanup # Clean up old releases.
cap deploy:cold # Deploys and starts a `cold'...
cap deploy:create_symlink # Updates the symlink to the ...
cap deploy:migrations # Deploy and run pending migr...
cap deploy:pending # Displays the commits since ...
cap deploy:pending:diff # Displays the `diff' since y...
cap deploy:rollback # Rolls back to a previous ve...
cap deploy:rollback:code # Rolls back to the previousl...
cap deploy:setup # Prepares one or more server...
cap deploy:symlink # Deprecated API.
cap deploy:update # Copies your project and upd...
cap deploy:update_code # Copies your project to the ...
cap deploy:upload # Copy files to the currently...
cap deploy:web:disable # Present a maintenance page ...
cap deploy:web:enable # Makes the application web-a...
cap integration # Set the target stage to `in...
cap invoke # Invoke a single command on ...
cap multistage:prepare # Stub out the staging config...
cap production # Set the target stage to `pr...
cap shell # Begin an interactive Capist...
You could use:
cap deploy:upload
See:
https://github.com/capistrano/capistrano/wiki/Capistrano-Tasks#deployupload
Anyone coming here who doesn't have cap deploy:upload can try using cap invoke to pull the file instead of pushing it. For example:
cap invoke COMMAND='scp host.where.file.is:/path/to/file/there /target/path/on/remote'
