How to copy a file to a bunch of servers with Capistrano

I use cap invoke a lot to run commands on a bunch of servers. I would like to also use Capistrano to push a single file to a bunch of servers.
At first I thought that put would do it, but put makes you supply the file's contents as data. I don't want to do that; I just want to copy an existing file from the machine where I'm running the capistrano command to the other machines.
It would be cool if I could do something like this:
host1$ cap HOSTS=f1.foo.com,f2.foo.com,f3.foo.com COPY /tmp/bar.bin
I would expect this to copy host1:/tmp/bar.bin to f1.foo.com:/tmp/bar.bin, f2.foo.com:/tmp/bar.bin, and f3.foo.com:/tmp/bar.bin.
This kind of thing seems very useful so I'm sure there must be a way to do this...

upload(from, to, options={}, &block)
The upload action stores the file at the given path on all servers targeted by the current task.
If you have ever used the deploy:upload task before, you might already know how this method works. It takes the path of the resource you want to upload and the target path on the remote servers.
desc "Uploads CHANGELOG.txt to all remote servers."
task :upload_changelog do
  upload("#{RAILS_ROOT}/CHANGELOG.txt", "#{current_path}/public/CHANGELOG")
end

You can also upload specific files with the deploy:upload task:
cap deploy:upload FILES=abc,def
This uploads the listed files to the respective servers.

Show all tasks:
cap -T
cap deploy # Deploys your project.
cap deploy:check # Test deployment dependencies.
cap deploy:cleanup # Clean up old releases.
cap deploy:cold # Deploys and starts a `cold'...
cap deploy:create_symlink # Updates the symlink to the ...
cap deploy:migrations # Deploy and run pending migr...
cap deploy:pending # Displays the commits since ...
cap deploy:pending:diff # Displays the `diff' since y...
cap deploy:rollback # Rolls back to a previous ve...
cap deploy:rollback:code # Rolls back to the previousl...
cap deploy:setup # Prepares one or more server...
cap deploy:symlink # Deprecated API.
cap deploy:update # Copies your project and upd...
cap deploy:update_code # Copies your project to the ...
cap deploy:upload # Copy files to the currently...
cap deploy:web:disable # Present a maintenance page ...
cap deploy:web:enable # Makes the application web-a...
cap integration # Set the target stage to `in...
cap invoke # Invoke a single command on ...
cap multistage:prepare # Stub out the staging config...
cap production # Set the target stage to `pr...
cap shell # Begin an interactive Capist...
You could use:
cap deploy:upload
See:
https://github.com/capistrano/capistrano/wiki/Capistrano-Tasks#deployupload

Anyone coming here who doesn't have cap deploy:upload can try using cap invoke to pull the file instead of pushing it. For example:
cap invoke COMMAND='scp host.where.file.is:/path/to/file/there /target/path/on/remote'
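If no deploy:upload task is available at all, the push itself can also be sketched as a plain shell loop over the hosts. The host names and file path below are just the placeholders from the question; the actual scp line is left commented out so the sketch only prints what it would do:

```shell
#!/bin/sh
# Push one local file to several servers with scp.
# Host list and file path are placeholders; adjust them for your setup.
FILE=/tmp/bar.bin
for host in f1.foo.com f2.foo.com f3.foo.com; do
    echo "copying $FILE to $host:$FILE"
    # scp "$FILE" "$host:$FILE"    # uncomment to actually copy
done
```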


MariaDB starts without errors but doesn't run

I start my MariaDB with
/etc/init.d/mysql start
Then I get
starting MariaDB database server mysqld
No more messages.
When I call
service mysql status
I get
MariaDB is stopped
Why?
My my.cnf is:
# Example mysql config file.
[client-server]
socket=/tmp/mysql-dbug.sock
port=3307
# This will be passed to all mysql clients
[client]
password=XXXXXX
# Here are entries for some specific programs
# The following values assume you have at least 32M ram
# The MySQL server
[mysqld]
temp-pool
key_buffer_size=16M
datadir=/etc/mysql/data
loose-innodb_file_per_table
[mariadb]
datadir=/etc/mysql/data
default-storage-engine=aria
loose-mutex-deadlock-detector
max_connections=20
[mariadb-5.5]
language=/my/maria-5.5/sql/share/english/
socket=/tmp/mysql-dbug.sock
port=3307
[mariadb-10.1]
language=/my/maria-10.1/sql/share/english/
socket=/tmp/mysql2-dbug.sock
[mysqldump]
quick
max_allowed_packet=16M
[mysql]
no-auto-rehash
loose-abort-source-on-error
Thank you for your help.
If your SELinux is set to permissive, try adjusting the permissions:
Files in /var/lib/mysql should be 660.
The /var/lib/mysql directory should be 755, and any of its subdirectories should be 700.
If your SELinux is set to enforcing, apply the right context.
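A minimal sketch of that permission scheme, run here against a scratch directory so it is safe to try anywhere (on a real server you would point DATADIR at /var/lib/mysql and run as root):

```shell
#!/bin/sh
set -e
# Demo of the permission scheme above, on a scratch directory.
# On a real server, set DATADIR=/var/lib/mysql and run as root.
DATADIR=$(mktemp -d)
mkdir -p "$DATADIR/testdb"
touch "$DATADIR/testdb/t.frm"

chmod 755 "$DATADIR"                                      # datadir itself: 755
find "$DATADIR" -mindepth 1 -type d -exec chmod 700 {} +  # subdirectories: 700
find "$DATADIR" -type f -exec chmod 660 {} +              # files: 660

ls -ld "$DATADIR/testdb" | cut -c1-10       # drwx------
ls -l "$DATADIR/testdb/t.frm" | cut -c1-10  # -rw-rw----
```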

How to set SplitMasterWorker value as false in giraph

I am trying to execute custom Giraph code from the Eclipse IDE, and when I run it I get: Exception in thread "main" java.lang.IllegalArgumentException: checkLocalJobRunnerConfiguration: When using LocalJobRunner, must have only one worker since only 1 task at a time!
So I want to set giraph.SplitMasterWorker=false. How and where do I set it?
Pass -ca giraph.SplitMasterWorker=false to your application as an argument.
If you are running Giraph on a single-node cluster, then passing "-ca giraph.SplitMasterWorker=false" will help. However, if you run Giraph on a multi-node cluster such as AWS EC2 based on Hadoop 2.x, then I would recommend modifying the mapred-site.xml file instead, adding a parameter such as the mapred.job.tracker value.
giraph.SplitMasterWorker=false is the variable you have to set when calling the Giraph runner. It can be passed in as a custom argument with -ca. Also, I think you are using the -w parameter; if you are running on your local machine it should not be more than 1, since there are no slave nodes to act as workers.
E.g. hadoop jar /usr/local/giraph1.0/giraph-examples/target/giraph-examples-1.1.0-for-hadoop-2.7.0-jar-with-dependencies.jar org.apache.giraph.GiraphRunner org.apache.giraph.examples.ConnectedComponentsComputation -vif org.apache.giraph.io.formats.IntIntNullTextInputFormat -vip -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat -op -w 5 -ca giraph.SplitMasterWorker=false

Auto-Running a C Program on Raspberry PI

How can I make my C program run automatically on my Raspberry Pi? I have seen a tutorial for achieving this, but I do not really know what I am still missing. My init script is shown below:
#! /bin/sh
# /etc/init.d/my_settings
#
# Something that could run always can be written here
### BEGIN INIT INFO
# Provides: my_settings
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# X-Interactive: true
# Short-Description: Script to start C program at boot time
# Description: Enable service provided by my_settings
### END INIT INFO
# Carry out different functions when asked to by the system
case "$1" in
  start)
    echo "Starting RPi Data Collector Program"
    # run the application you want to start
    sudo /home/pi/Documents/C_Projects/cfor_RPi/charlie &
    ;;
  stop)
    echo "Killing RPi Data Collector Program"
    # kill the application you want to stop
    sudo killall charlie
    ;;
  *)
    echo "Usage: /etc/init.d/my_settings {start | stop}"
    exit 1
    ;;
esac
exit 0
The problem is that my program does not run at boot time, and I do not really know why. What could I be missing? Is the "killall" statement "killing" some useful process during execution? I am making this program run as a background application, but I know that a few seconds into initialization the RPi asks for a username and a password to start the session. Is it possible that my RPi is not executing this code because I am not providing the login information? I do not have a monitor, so my program has to run as soon as I plug the RPi in. Thanks a lot in advance!
You'll have to create links to that init script in the proper /etc/rcX.d folders. On Raspbian this is done by:
sudo update-rc.d YOUR_INIT_SCRIPT_NAME defaults
You can read this debian how-to for further information. Also you should read more about run levels in Debian.
How scripts/services are run at startup time generally depends on the type of init system used. Off the top of my head, I'd distinguish the following 4 types:
Embedded style: a single shell script has all the commands to start the system. Usually the script is at one of the paths the kernel tries to start as the init process.
BSD style
System V style: this uses /etc/inittab and later scripts in /etc/rc*.d/ to start services one by one
systemd
Raspbian derives from Debian, so I suppose System V style. You have to symlink your script into /etc/rc2.d, like:
ln -s /etc/init.d/your-script /etc/rc2.d/S08my-script
Note the structure of the link name: the 'S' says the script should be started when the run level is entered, and the '08' determines the position (do an ls /etc/rc2.d/ to see the other links).
More details: init(8).
update-rc.d(8) is the proper way to create the symlinks on Debian. See the manpage:
update-rc.d - install and remove System-V style init script links
I advise reading at least the man pages update-rc.d(8) and init(8).
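On newer Raspbian images that boot with systemd (mentioned above as the fourth init style), a small unit file is often simpler than a System V init script. A sketch, reusing the program path from the question; the unit name "charlie.service" is made up for illustration:

```
# /etc/systemd/system/charlie.service  (hypothetical unit name)
[Unit]
Description=RPi Data Collector Program
After=network.target

[Service]
ExecStart=/home/pi/Documents/C_Projects/cfor_RPi/charlie
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable charlie.service; it starts at boot with no login session needed.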
http://www.akeric.com/blog/?p=1976
Here is a tutorial on how to auto-login and start a script at boot.
If it still doesn't work, the problem is either in your script or in your C program.

tried to add a new update.secondary hook to my repos in gitolite and now git push fails

remote: Undefined subroutine &main::repo_rights called at hooks/update line 41.
remote: error: hook declined to update
I have removed the update hook from all of my repos in order to get around this, but I know that they are now wide open.
I ran gl-setup, and I may have mixed versions of gitolite on my machine. I am afraid that I ran gl-setup from a version different from the one I am currently running. I am not sure how to tell. Please help. :-(
Update, for a more recent version of Gitolite (namely a V3.x or more), the official documentation would be: "adding your own update hooks", and it uses VREFs (virtual refs).
add this line in the rc file, within the %RC block, if it's not already present, or uncomment it if it's already present and commented out:
LOCAL_CODE => "$ENV{HOME}/local",
copy your update hook to a subdirectory called VREF under this directory, giving it a suitable name (let's say "crlf"):
# log on to gitolite hosting user on the server, then:
cd $HOME
mkdir -p local/VREF
cp your-crlf-update-hook local/VREF/crlf
chmod +x local/VREF/crlf
in your gitolite-admin clone, edit conf/gitolite.conf and add lines like this:
- VREF/crlf = @all
to each repo that should have that "update" hook.
Alternatively, you can simply add this at the end of the gitolite.conf file:
repo @all
- VREF/crlf = @all
Either way, add/commit/push the change to the gitolite-admin repo.

How to create patch for a new file?

I know to create a patch for an existing file is easy:
diff -aru oldFile newFile 2>&1 | tee myPatch.patch
But what to do if I want to create a patch for a totally new file? Assume my file resides in a folder called TestDir. Earlier, TestDir did not have a file called entirelyNewfile.c, but now it does.
How do I create a patch for entirelyNewfile.c? The idea is that the patch should apply properly to the specs and generate the RPM build, with the BUILD dir having this new file.
Just to add: if I try to take a diff between the two directories (one having the new file and the other missing it) to create the patch, it generates an error saying the file is only present in one folder.
Add -N to the diff arguments.
diff /dev/null <newfile>
will create a patch for your new file.
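A small self-contained demonstration of -N, with scratch directories standing in for the real TestDir trees:

```shell
#!/bin/sh
set -e
# Demo: with -N, a file that exists only in the new tree still shows up in the patch.
tmp=$(mktemp -d)
mkdir "$tmp/old" "$tmp/new"
printf 'hello\n' > "$tmp/new/entirelyNewfile.c"
cd "$tmp"
diff -Naur old new > myPatch.patch || true   # diff exits 1 when differences are found
grep '^+hello' myPatch.patch                 # the new file's content appears as additions
```

Without -N, the same diff would only report that entirelyNewfile.c exists in one directory.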
The easiest way to do this that I know is to put all the files under version control (if they aren't already). I prefer Git, but something similar could be done in any other version control system:
git init
git add .
git commit -m "initial state"
<do your edits here>
git add .
git commit -m "new state"
git diff HEAD^1
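The same sequence can be run end to end in a throwaway repository and piped into a patch file. The file name and content are placeholders, and the identity settings are inlined only so the commits work on any machine:

```shell
#!/bin/sh
set -e
# Run the version-control recipe above in a scratch repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "initial state"
echo 'int main(void) { return 0; }' > entirelyNewfile.c    # <do your edits here>
git add entirelyNewfile.c
git -c user.email=demo@example.com -c user.name=demo commit -q -m "new state"
git diff HEAD^1 > new-file.patch      # diff against the parent commit
grep '^+++ b/entirelyNewfile.c' new-file.patch
```

The resulting new-file.patch contains a creation hunk for entirelyNewfile.c and can be applied elsewhere with git apply or patch -p1.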
