Symfony app > content of deploy:migrate task not executed - continuous-deployment

The content of the deploy:migrate task is not executed.
Running cap staging deploy:migrate does not execute the doctrine:schema:update Symfony command; instead, the output of the task execution is empty.
lock "~> 3.11.0"
set :application, "some app"
set :repo_url, "...*.git"
set :stages, ["staging", "production"]
set :default_stage, "staging"
set :symfony_console_path, "bin/console"
set :linked_files, ["app/config/parameters.yml", "app/config/parameters_staging.yml"]
set :linked_dirs, ["app/logs", "vendor", "web/vendor", "web/assets", "web/uploads/user_data"]
set :model_manager, "doctrine"
set :format, :pretty
set :log_level, :debug
set :keep_releases, 3
after 'deploy:updated', 'symfony:assets:install'
after 'deploy:updated', 'deploy:migrate'
namespace :deploy do
  task :migrate do
    on roles(:db) do
      invoke 'symfony:console', 'doctrine:schema:update', '--force', '--env=staging'
    end
  end
end

There is no need to pass the additional '--env=staging' parameter.
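Dropping the flag, the task would then read as follows. This is a Capistrano DSL fragment, not standalone Ruby, and it assumes the capistrano/symfony plugin provides the symfony:console task as in the question:

```ruby
# Same migrate task without the explicit --env flag; the environment
# comes from the Symfony plugin's own configuration instead.
namespace :deploy do
  task :migrate do
    on roles(:db) do
      invoke 'symfony:console', 'doctrine:schema:update', '--force'
    end
  end
end
```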

Related

How to skip the SSIS package failure even if the task fails

An SSIS package is shown below. It has to be designed in such a way that if an email task fails, the package does not fail.
The second precedence constraint is set to Completion, so the package skips the Start Email task if it fails, and the package does not fail.
The question is how to implement this for the last task, End Email: if this task fails, it should not fail the whole package.
Use a dummy task that you connect the End Email task to with a completion constraint. An empty Sequence Container, which you could name "dummy container" or "Endpoint", should do the job.
Or just open the End Email task's properties and change the following:
ForceExecutionValue --> True
ForceExecutionResult --> Success
FailPackageOnFailure --> False
FailParentOnFailure --> False

SolrCtl instance using JAAS option

I am using a Cloudera cluster that is kerberized with TLS and SSL enabled. I am trying to create an instance directory using the --jaas option of the solrctl command, but it is not working.
The solrctl command I am using is below:
solrctl --jaas jaas-client.conf instancedir --create testindex3 /home/myuserid/testindex3
The jaas-client.conf file is below:
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="<PATH TO KEYTAB>/username.keytab"
storeKey=true
useTicketCache=false
debug=true
principal="fully.qualified.domain.name#<YOUR-REALM";
}
For the above command I am getting this error:
Uploading configs from /user/myuserid/testindex3/conf to ZookeeperHost1:2181,ZookeeperHost2:2181,ZookeeperHost3:2181/solr. This may take up to a minute.
Debug is true storeKey true useTicketCache false useKeyTab true doNotPrompt false ticketCache is null isInitiator true KeyTab is /username.keytab refreshKrb5Config is false principal is fully.qualified.domain.name#
Error: can't upload configuration
I am not sure why the instance is not created or what I am missing. Is there anything wrong in my jaas-client.conf file? Please advise.
Note: I haven't used "user/fully.qualified.domain.name#
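For comparison, a Krb5LoginModule client section normally gives the principal in full primary/instance@REALM form with a closing quote. A sketch with placeholder values (adjust the keytab path, hostname, and realm to your environment):

```conf
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/path/to/username.keytab"
  storeKey=true
  useTicketCache=false
  debug=true
  principal="username/fully.qualified.domain.name@YOUR-REALM";
};
```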

how to set flink statsd reporter?

I read the docs, but I can't get it to work.
I set conf/flink-conf.yaml like this:
metrics.reporters: StatsDReporter
metrics.reporter.StatsDReporter.class: org.apache.flink.metrics.statsd.StatsDReporter
metrics.reporter.StatsDReporter.host: www.example.com
metrics.reporter.StatsDReporter.port: 8125
But I don't know where the StatsDReporter class is supposed to come from, or how to make it available to Flink.
Is there an example of setting it up?
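For reference, the reporter class ships in a separate jar in the Flink distribution's opt/ directory and has to be copied into lib/ so it is on Flink's classpath. With that in place, a configuration like the one above should load; a sketch where the reporter name, host, and port are placeholders:

```yaml
# conf/flink-conf.yaml — StatsD reporter wiring (host/port are placeholders).
# Prerequisite: copy opt/flink-metrics-statsd-<version>.jar into lib/ first,
# so the reporter class named below can be found on the classpath.
metrics.reporters: stsd
metrics.reporter.stsd.class: org.apache.flink.metrics.statsd.StatsDReporter
metrics.reporter.stsd.host: localhost
metrics.reporter.stsd.port: 8125
```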

Transaction in Capistrano not working

I have an issue with the transaction method in Capistrano.
The error I am getting is:
NoMethodError: undefined method `transaction' for #<SSHKit::Backend::Netssh:0x2408b20>
Capistrano version is: 3.4.0 (Rake Version: 10.4.2).
Code is as below:
namespace :fun do
  desc "Sample showing rescue, ensure, and on_rollback inside a transaction"
  task :stuff do
    on roles :all do
      transaction do
        on_rollback { logger.debug "my rollback" }
        begin
          logger.debug "main"
          # Either run or run_locally will work the same
          # run_locally "false"
          run "false"
        rescue => e
          logger.debug "rescue #{e.class}"
          raise e
        ensure
          logger.debug "ensure"
        end
      end
    end
  end
end
Where is the issue?
The transaction keyword was removed in Capistrano 3.
The developers recommend using the new flow control to handle this case: https://github.com/capistrano/capistrano/issues/860
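Since transaction is gone, equivalent rollback semantics can be expressed with plain Ruby begin/rescue. A minimal standalone sketch of the pattern (the names here are illustrative, not part of the Capistrano 3 API):

```ruby
# A stand-in for Capistrano 2's transaction/on_rollback, built from plain
# begin/rescue: steps register rollback handlers, and on failure the
# handlers run in reverse order before the error is re-raised.
def with_rollback
  rollbacks = []
  yield ->(&handler) { rollbacks << handler }   # registrar for rollback blocks
rescue => e
  rollbacks.reverse_each(&:call)                # undo completed steps in reverse
  raise e
end

log = []
begin
  with_rollback do |on_rollback|
    on_rollback.call { log << "rolled back step 1" }
    log << "ran step 1"
    raise "deploy step failed"                  # simulate a failing step
  end
rescue RuntimeError
  log << "rescued"
end
# log is now ["ran step 1", "rolled back step 1", "rescued"]
```

The same shape maps onto a Capistrano task body: run each remote step, collect its undo action, and replay the undo actions in the rescue clause.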

Command wasn't called in deploy file use Capistrano 3

I have a problem when I run cap production deploy.
I want to restart the Unicorn server, like an Ubuntu service, after cap production deploy finishes.
The problem is that /etc/init.d/unicorn_my_app restart is not run when cap production deploy finishes, so the server is not restarted.
I have to SSH into the server and run /etc/init.d/unicorn_my_app restart manually, and I don't know the reason.
I have read a lot of documentation and topics and tried many things, but without success. I hope somebody can help me. Thank you very much.
This is my deploy.rb:
lock '3.4.0'
set :application, 'my_app'
set :repo_url, 'git@bitbucket.org:****/my_app.git'
set :branch, 'master'
set :rvm_type, :auto
set :rvm_ruby_version, '2.2.1@my_app'
set :bundle_gemfile, -> { release_path.join('Gemfile') }
set :bundle_dir, -> { shared_path.join('bundle') }
set :bundle_flags, nil
set :bundle_without, %w{development test}.join(' ')
set :bundle_binstubs, nil
set :bundle_roles, :all
set :app_path, '/var/www/my_app/current/'
set :scm, :git
set :pty, true
set :use_sudo, true
set :format, :pretty
set :log_level, :debug
set :keep_releases, 5
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'vendor/bundle', 'public/system')
namespace :deploy do
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
    end
  end
  task :restart_unicorn do
    on roles(:all) do
      begin
        puts "INFO #{Time.now} starting unicorn"
        execute "/etc/init.d/unicorn_my_app restart"
      rescue => e
        puts "ERROR #{Time.now} error with #{e.inspect}"
      end
    end
  end
  after :finishing, "deploy:cleanup", "deploy:restart_unicorn"
end
Update: @Deep: I tried adding sudo /etc/init.d/unicorn_my_app restart and it still didn't run. Can anybody explain why? Thank you.
