Sync web app files across client installations

We have a web app. It is installed for each of our clients in a different folder on our VPS. We also have a separate folder with the base files of the web app (all code up to date).
The problem we're having is that we need to automate the update process of the web app for all client installations. If we add files to the base web app, move files, create a directory, or remove a file or directory, these changes should automatically be applied to every client installation of the web app. We're currently in beta, and each code update means manually updating all files for each client installation over FTP; the more changes there are, the longer and more error-prone this process becomes.
Is there a tool available to automate this kind of process? Or if not, how do you suggest it should be approached?
/
  /clients
    /client1.domain.com
      /[web app subfolders and files...]
    /client2.domain.com
      /[web app subfolders and files...]
    /client3.domain.com
      /[web app subfolders and files...]
  /base_web_app
    /[web app subfolders and files...]
So basically, each time we change the contents of /base_web_app, those changes should be automatically synced to the web app installations inside /clients (that is, /client1.domain.com, /client2.domain.com, /client3.domain.com).
It is also important to note that we need some files and/or subfolders to be ignored and not overwritten, mainly configuration files specific to each client's installation.

Check out rsync: http://rsync.samba.org/examples.html. It is a tool for synchronizing files from one location to another (say, your staging area to your production area). You can use patterns to specify what to sync and what to exclude, and it only copies changed files.
On your staging area (where you have the latest changes you want to sync), you could do something like this:
# sync the staging base_web_app directory to the production /base_web_app
# (the trailing slashes make rsync copy the directory's contents)
rsync -avc base_web_app/ server:/base_web_app/
# sync base_web_app to each client directory, excluding each client's config dir
# --delete removes files on the client that were removed from base_web_app;
# excluded paths are left alone unless you also pass --delete-excluded
rsync -avc --delete --exclude 'config/' base_web_app/ server:/clients/client1.domain.com/
rsync -avc --delete --exclude 'config/' base_web_app/ server:/clients/client2.domain.com/
rsync -avc --delete --exclude 'config/' base_web_app/ server:/clients/client3.domain.com/
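Since the set of clients will grow, a small wrapper can loop over the installations instead of repeating the command per client. A minimal sketch, assuming the same "server" SSH alias and the directory layout shown above:

#!/bin/sh
# Sketch: push base_web_app to every client installation under /clients,
# keeping each client's config directory untouched.
for client in client1.domain.com client2.domain.com client3.domain.com; do
    rsync -avc --delete --exclude 'config/' base_web_app/ "server:/clients/$client/"
done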

Related

Struggling with vsftpd / apache2 on Ubuntu 20.04

I installed vsftpd and got it running with user ftpuser. The owner and group of /var/www are set to ftpuser:ftpuser. I can upload, view, edit and delete files, which is nice.
But the website can't do anything: e.g. it can't upload files via PHP, can't run installers, and so on.
So I changed the owner to www-data:www-data. Now I can upload files via HTTP and update my WordPress.
But I cannot change files via FTP anymore (550 Create directory operation failed).
I have added ftpuser to the group www-data but still can't do anything on the server.
My vsftpd.conf:
listen=NO
listen_ipv6=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
secure_chroot_dir=/var/run/vsftpd/empty
pam_service_name=vsftpd
force_dot_files=YES
pasv_min_port=40000
pasv_max_port=50000
allow_writeable_chroot=YES
Two different processes (apache and vsftpd) run as different users and groups. Files and directories have user and group ownership, so you need to configure permissions that let apache read and write files and directories owned by vsftpd (or vice versa, depending on how you set up permissions and groups).
A solution could be (put together as commands in the sketch below):
create a common group called, for example, 'web-manager'
change the group of the folder '/var/www' to web-manager (chgrp web-manager /var/www)
allow those in the 'web-manager' group to write into the '/var/www' folder (chmod 775 /var/www)
put the apache and vsftpd users in the 'web-manager' group (usermod -a -G web-manager www-data; usermod -a -G web-manager ftpuser)
restart the apache and vsftpd daemons
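A minimal sketch of those steps; the -R flags and the setgid bit are my additions so that existing files and newly created ones pick up the shared group:

# create the shared group and give it write access under /var/www
groupadd web-manager
chgrp -R web-manager /var/www
chmod -R 775 /var/www
chmod g+s /var/www            # new files/dirs inherit the web-manager group
# put both the apache user and the ftp user in the group
usermod -a -G web-manager www-data
usermod -a -G web-manager ftpuser
# note: with local_umask=022 in vsftpd.conf, FTP-uploaded files are not
# group-writable; local_umask=002 may be needed for apache to modify them
systemctl restart apache2 vsftpd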

Meteor Server-Only Files and Temporary Downloads

In Meteor, are there any folders where I can put a .zip which will not be sent to the client?
Secondary question: how can I make temporary download links on the app, which self-destruct after a period of time?
The idea is that only the server will have access to this file. /server doesn't seem to work because any files I place in there that are not code are not included in the final bundle.
My Solution - Heroku Filesystem
This is probably not the best solution to this problem; however, for anyone else who needs files bundled with the app that cannot be seen by the client, here's how I did it.
Note that deleting the secure files is necessary because Heroku does not persist filesystem changes across restarts.
Place files in a folder named "securefiles" or similar in your /public folder.
These get compiled into a folder named /static in the bundle. Note that if you're using the Heroku buildpack, the actual path to the working directory for the server is /app/.meteor/heroku_build/app/.
Next, on server start, detect whether the app is bundled or not. You can do this by checking for the existence of the static folder; there are probably other files unique to a bundle as well.
If you're bundled, copy the files out of public using ncp. I've made a meteorite package just for this purpose; use mrt add ncp to add the node copy tool to your project. I recommend copying to the root directory of the app, as this is not visible to clients.
Next, delete the folder from static.
At this point you have files which can only be accessed by the server. Here's some sample coffeescript to do this:
Meteor.startup ->
  fs = __meteor_bootstrap__.require 'fs'
  bundled = fs.existsSync '/app' # checking /app because on Heroku the app is stored in /app
  rootDir = if bundled then "/app/.meteor/heroku_build/app/" else "" # not sure how to get the root directory of a local build; known limitation
  if fs.existsSync rootDir+"securefiles"
    rmdir rootDir+"securefiles"
  # do the same with any other temporary folders you want to clear on startup
  # now copy the secure files out of static
  ncp rootDir+'static/securefiles', rootDir+'securefiles', ->
    rmdir rootDir+'static/securefiles' if bundled
Secure/Temporary File Downloads
Note: this code depends on the random package and my ncp package.
It's very easy to extend this system to support temporary file downloads, as I have done in my project. Here's how: run url = setupDownload("somefile.rar", 30) to create a download link that self-destructs after 30 seconds.
setupDownload = (dlname, timeout) ->
  if !timeout?
    timeout = 30
  file = rootDir+'securefiles/'+dlname
  return '' if !fs.existsSync file
  dlFolder = rootDir+'static/dls'
  fs.mkdirSync dlFolder if !fs.existsSync dlFolder
  dlName = Random.id()+'.rar' # possible improvement: detect the file extension
  dlPath = dlFolder+'/'+dlName
  ncp file, dlPath, () ->
    Fiber(() ->
      Meteor.setTimeout(() ->
        fs.unlink dlPath
      , 1000*timeout)
    ).run()
  "/dls/"+dlName
Perhaps I will make a package for this. Let me know if you could use something like that.

Google App Engine: How to perform a remote deploy to dev app server?

I am in the process of setting up a "QA environment" for my GAE app. This QA environment will simply be a small server on my home network with a dedicated IP address. I'm writing an Ant script to check the project out of my SVN repo, build it on my build server, and then deploy it "remotely" (across my home LAN) to the QA app server.
With Tomcat, I would just scp the web archive to the machine's webapps/ directory, and since it can be configured to hot-deploy, that is all I usually need for a QA deploy.
But I'm new to GAE, and so I'm not seeing how I can achieve such a remote deployment via Ant. The best I can think of (although somewhat convoluted) would be:
Checkout and build the WAR on the buildserver, like I normally would
scp the WAR to a staging directory, somewhere on the QA machine; say 192.168.1.55:/opt/gae/staging
Have a lightweight RESTful web service running on that machine (maybe hosted by Tomcat or Jetty) listening for a client to hit a certain API, say http://192.168.1.55:8080/GaeRemoteApi/deploy. When the request handler gets a request for this URL, it kicks off a shell command that copies the WAR into the correct directory and then runs appcfg.sh to actually deploy the WAR to my QA app server.
I'm pretty sure I could get this working within a day or two, but was wondering if the GAE ships with an easier (baked in) solution; or if a fresh set of eyes can think of something even simpler. Thanks in advance!
I think you should just keep it simple:
Since you are on Ubuntu, you can write a shell script that will (the steps are put together in a sketch at the end of this answer):
ssh to the remote server
stop the current GAE dev appserver
rename the existing war directory
scp the new deployment to the QA server's war directory
ssh to the QA server and start the GAE dev appserver
You can call a shell script from ant using: http://sumedha.blogspot.com.au/2008/06/how-to-call-shell-script-from-ant.html
To stop the dev appserver:
killall -e ./appengine-java-sdk/bin/dev_appserver.sh
To run the dev appserver:
nohup ./appengine-java-sdk/bin/dev_appserver.sh your/war/directory &
Documentation for running the development server:
https://developers.google.com/appengine/docs/java/tools/devserver
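Putting the steps together, a deploy script along these lines could be called from Ant. This is a sketch only: the QA host, the /opt/gae paths and the SDK location are assumptions for illustration.

#!/bin/sh
# Sketch: push a freshly built exploded WAR to the QA box and bounce the dev appserver.
QA_HOST=192.168.1.55
# stop the running server and keep the previous deployment around
ssh "$QA_HOST" "killall -e dev_appserver.sh; mv /opt/gae/war /opt/gae/war.prev"
# copy the new build over
scp -r build/war "$QA_HOST:/opt/gae/war"
# start the dev appserver again, detached from the ssh session
ssh "$QA_HOST" "nohup ./appengine-java-sdk/bin/dev_appserver.sh /opt/gae/war > /dev/null 2>&1 &"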

How exactly does Tomcat run out of CATALINA_HOME and CATALINA_BASE

I'm having trouble finding documentation on this. After some googling I find that bin, conf, logs, temp, webapps and work are directories that should exist in CATALINA_BASE.
temp, logs, webapps, bin and work I don't have any trouble understanding.
bin I suppose is just another bin folder; if for some reason both CATALINA_HOME and CATALINA_BASE are in PATH, then scripts in both folders will be available for execution.
But what about conf? Will the contents of CATALINA_HOME/conf be totally ignored if CATALINA_BASE is set? Suppose I only need to customize a few config files per CATALINA_BASE instance: would I still need to keep a complete set of config files in CATALINA_BASE/conf, or could the standard config files in CATALINA_HOME/conf be shared?
And ditto for CATALINA_BASE/lib ... would this work as a "global" lib folder per instance?
You can find the answer in the Tomcat documentation:
http://tomcat.apache.org/tomcat-6.0-doc/RUNNING.txt
Advanced Configuration - Multiple Tomcat Instances
In many circumstances, it is desirable to have a single copy of a
Tomcat binary distribution shared among multiple users on the same
server. To make this possible, you can set the $CATALINA_BASE
environment variable to the directory that contains the files for your
'personal' Tomcat instance.
When you use $CATALINA_BASE, Tomcat will calculate all relative
references for files in the following directories based on the value
of $CATALINA_BASE instead of $CATALINA_HOME:
bin - Only setenv.sh (*nix), setenv.bat (windows) and tomcat-juli.jar
conf - Server configuration files (including server.xml)
logs - Log and output files
webapps - Automatically loaded web applications
work - Temporary working directories for web applications
temp - Directory used by the JVM for temporary files (java.io.tmpdir)
Note that by default Tomcat will first try to load classes and JARs
from $CATALINA_BASE/lib and then $CATALINA_HOME/lib. You can place
instance specific JARs and classes (e.g. JDBC drivers) in
$CATALINA_BASE/lib whilst keeping the standard Tomcat JARs in
$CATALINA_HOME/lib.
If you do not set $CATALINA_BASE, $CATALINA_BASE will default to the
same value as $CATALINA_HOME, which means that the same directory is
used for all relative path resolutions.
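In practice, setting up a second instance looks something like the sketch below. The paths are examples; the per-instance conf typically needs at least its own server.xml (with unique ports) and web.xml, which is why they are copied rather than shared.

#!/bin/sh
# Sketch: one shared Tomcat install (CATALINA_HOME), one private instance (CATALINA_BASE).
export CATALINA_HOME=/opt/tomcat
export CATALINA_BASE=/srv/instance1
mkdir -p "$CATALINA_BASE"/conf "$CATALINA_BASE"/logs "$CATALINA_BASE"/temp \
         "$CATALINA_BASE"/webapps "$CATALINA_BASE"/work "$CATALINA_BASE"/lib
# per-instance config: copy, then edit the ports in server.xml
cp "$CATALINA_HOME"/conf/server.xml "$CATALINA_HOME"/conf/web.xml "$CATALINA_BASE"/conf/
# instance-specific JARs (e.g. JDBC drivers) go in $CATALINA_BASE/lib
"$CATALINA_HOME"/bin/startup.sh   # resolves relative paths against $CATALINA_BASE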

How do I change the default location of the log files for GAE's bulkloader?

While working on my GAE project under my dev environment, whenever I upload data to my dev datastore, the logfiles are stored in my current directory, for instance:
C:\dev> ls
bulkloader-log-20090912.104643
bulkloader-log-20090912.104648
bulkloader-log-20090912.104731
bulkloader-log-20090912.105526
bulkloader-log-20090912.110428
bulkloader-progress-20090912.104648.sql3
bulkloader-progress-20090912.104731.sql3
bulkloader-progress-20090912.105526.sql3
bulkloader-progress-20090912.110428.sql3
project
project is my GAE app. The above is generated when I run appcfg.py upload_data. Is there a way to tell GAE where to store those log files, for instance in a logs folder?
Use the --log_file=... option to appcfg.py: with this command-line option you can give the complete path to the log file, including folder and name. (You cannot give JUST the folder and let it figure out the name; for that, you need to write a tiny script that figures out the name and then calls appcfg.py. One such wrapper is sketched below.)
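A minimal sketch of such a wrapper, assuming a POSIX shell; the logs/ folder and the timestamp pattern (mirroring the default bulkloader-log-YYYYMMDD.HHMMSS naming) are my choices:

#!/bin/sh
# Sketch: build a timestamped log path under logs/, then forward all
# remaining arguments (e.g. upload_data and its flags) to appcfg.py.
mkdir -p logs
appcfg.py --log_file="logs/bulkloader-log-$(date +%Y%m%d.%H%M%S)" "$@"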
