Set owner:group and chmod 770 on apache2-created files

I would like to configure Apache to create files with a personalized owner:group and chmod.
I have a website folder that needs to be manipulated by both Apache and (FTP) users.
Currently I do the following (where 'mygroup' is the group of the FTP users):
chown www-data:mygroup -R /my/website/files
chmod 770 -R /my/website/files
But when apache2 manipulates files and creates files or folders, they end up with:
-rw-r--r-- 1 www-data www-data
Any idea how to configure apache2?
Edit: Debian 6

There is no real good way to do this AFAIK. The stock version of Apache doesn't have a mechanism to spawn workers under different users per request. All of its workers run under the same user and therefore can't write files as another.
That being said, there are some ways around this.
The first way will require you to run Apache as root. Apache, as it sits on your server, runs under an unprivileged user, and unprivileged users can't change the user they run under; only processes running as root can do that. If you are willing to run your Apache as root, there is a multi-process mod available here. It allows you to run each VHOST under a different user (defined in your config). That means you would then also need to set up each user with their own VHOST. This way would work, but you sacrifice a bit of security by doing it.
The second, more secure, but more "hacky" way to do it would be to run a completely new and individual instance of Apache for each user. So you have an Apache with its own set of config files JUST for userA, another Apache with its own different and separate set of configs just for userB, etc. Each instance of Apache could listen on a different port (e.g. userA's listens on port 8080, userB's on port 8081...). Then you could use some kind of front-end reverse proxy to sort it all out and route the traffic to the appropriate Apache instance, as sketched below.
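Here is a minimal sketch of that second approach, assuming Apache 2.x with mod_proxy loaded; the user names, ports, host names and paths are illustrative placeholders, not part of the original setup:
# each per-user backend runs from its own config tree:
/usr/sbin/apache2 -f /etc/apache2-userA/apache2.conf   # contains "Listen 8080"
/usr/sbin/apache2 -f /etc/apache2-userB/apache2.conf   # contains "Listen 8081"
# the front-end instance routes requests by host name:
cat > /etc/apache2/conf.d/frontend.conf <<'EOF'
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName userA.example.com
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
<VirtualHost *:80>
    ServerName userB.example.com
    ProxyPass        / http://127.0.0.1:8081/
    ProxyPassReverse / http://127.0.0.1:8081/
</VirtualHost>
EOF
apache2ctl graceful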

Looks like you are working under openSUSE or SLES.
If so, take a look at the file /etc/apache2/uid.conf...
For the umask: not sure, actually. What certainly works is to create a .profile file under the Apache user's home directory and set the umask in there. But I bet there is a more elegant solution!
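Since the question's edit says Debian 6, here is a hedged sketch of a common combination there: the setgid bit on the directories so new files inherit the FTP group, plus a group-friendly umask for Apache (on Debian, /etc/apache2/envvars is sourced when Apache starts):
# make new files and directories inherit 'mygroup'
chown -R www-data:mygroup /my/website/files
chmod -R 770 /my/website/files
find /my/website/files -type d -exec chmod g+s {} \;
# relax Apache's umask so group write survives on newly created files
echo 'umask 007' >> /etc/apache2/envvars
/etc/init.d/apache2 restart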

Related

Authenticate local user without running as root

A server application I wrote is currently running as root and authenticates local system users with getspnam() and crypt(), which requires root privileges to access the shadow file. Now I want the application not to run as root in a production system. What are the alternatives that never require root for authenticating local users? The application currently runs under Debian but is written to be portable in general.
None of those files you read are supposed to be read by a userspace application. They are system files. The administrator is free to leave the files in place but stop using their contents - a perfectly valid scenario - and there may well be user information that goes beyond what's in those files. Say, if the machine is joined to an Active Directory domain, or otherwise uses LDAP for authentication: the user list will come from the directory, with passwd holding just the local system accounts and nothing else. System services need those files in /etc, and that's that - specifically, the PAM module that provides local accounts :)
Thus: use Pluggable Authentication Modules (PAM). You'll be using the public interface to PAM. Since PAM is cross-platform, it will work on other Unices, say Solaris.
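To get a feel for the PAM route before wiring it into the application, a hedged experiment from the shell (this assumes the pamtester utility is installed; the user name is a placeholder):
# authenticate user 'alice' against the 'login' PAM service;
# pamtester prompts for the password and prints the PAM result
pamtester login alice authenticate
# the application itself would make the equivalent pam_start()/
# pam_authenticate() calls through libpam instead of reading /etc/shadow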

Nagios: Not able to write Performance data into file

I am trying to make Nagios and Graphite communicate, but I am not able to write the Nagios performance data to the file.
I am referring to the sites mentioned below:
http://nagios.manubulon.com/traduction/docs25en/perfdata.html
http://nagios.manubulon.com/traduction/docs14en/xpdfile.html
To configure nagios.cfg:
http://nagios.manubulon.com/traduction/docs25en/configmain.html#host_perfdata_file
Please give some details on the perfdata file.
Make sure the file is writable by the user your Nagios is running as.
For example:
If you use /usr/local/nagios/var/host-perfdata.dat to store your host performance data and your Nagios runs as the user nagios, the permissions would look like:
-rw-rw-r-- nagios nagios host-perfdata.dat
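A hedged sketch of setting that up from the shell, assuming the stock source-install paths used above:
# create the perfdata file and hand it to the nagios user
touch /usr/local/nagios/var/host-perfdata.dat
chown nagios:nagios /usr/local/nagios/var/host-perfdata.dat
chmod 664 /usr/local/nagios/var/host-perfdata.dat
# performance-data processing must also be enabled in nagios.cfg:
#   process_performance_data=1
#   host_perfdata_file=/usr/local/nagios/var/host-perfdata.dat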

Building a centralized configuration repository

I'm trying to develop an open-source application to act as a sort of centralized configuration management tool for all Unix platforms, for example changing the root password, SSH configuration, DNS settings, /etc/hosts management, and others.
I need your feedback on what you recommend as the interface for all the configuration (a list of scripts will run on the Unix servers as clients to read the configuration and apply it on each system, in "client===>to===>server mode").
Should I use LDAP to host the configurations, so any Unix OS can talk to the LDAP server to get its configuration?
Or should I just save the configuration in a database (e.g. MySQL) and build a web interface that reads the database and prints the configuration to the client?
Or do you have any other ideas?
You might look into something like Chef or Puppet instead. Why re-invent the wheel?
Curl can download a file from a URL and write that file to standard output. For example, executing curl -sS http://someHost/file.cfg will download "file.cfg" from the specified web server. The "-sS" options instruct Curl to print error messages but not any progress diagnostics. By the way, Curl supports many protocols including HTTP, FTP and LDAP, so you have flexibility in the technology you want to use to host your centralised configuration repository (CCR).
You could use curl to retrieve a configuration file from the CCR, store the result in a local file and then parse that local file.
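A minimal hedged sketch of that fetch-and-parse loop, reusing the answer's example host; the /etc/myapp path and the dns_server key are made-up placeholders:
# fetch the config from the CCR, replacing the old copy only on success
curl -sS http://someHost/file.cfg -o /tmp/file.cfg.new && mv /tmp/file.cfg.new /etc/myapp/file.cfg
# then parse the local copy, e.g. pull a single setting out of it
grep '^dns_server=' /etc/myapp/file.cfg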
Check out Blueprint from DevStructure. It sounds like something along the lines of what you're trying to do. Basically it reverse engineers servers and detects everything that has changed from the install state. Open-source too.
https://github.com/devstructure/blueprint (Blueprint # Github)
We are also about to launch ConfigChief, which is a central configuration repository that does what you want: a central point to store configuration (with all the features like versioning, audit, ACLs, inheritance, etc.).
Once you have that, combined with change notification, you can just run curl as Ciaran McHale suggests against the CCR and get your parsed configuration file back. This would eliminate the need to write scripts that generate config files from the outside.
If you are interested, you can signup for a beta at http://woot.configchief.com
DISCLAIMER: I guess it is obvious from the first word!

Two way sync with rsync

I have a folder a/ and a remote folder A/.
I now run something like this in a Makefile:
get-music:
	rsync -avzru server:/media/10001/music/ /media/Incoming/music/
put-music:
	rsync -avzru /media/Incoming/music/ server:/media/10001/music/
sync-music: get-music put-music
When I run make sync-music, it first gets all the diffs from server to local and then does the opposite, sending all the diffs from local to server.
This works very well, but only as long as there are just updates or new files. If there are deletions, it doesn't do anything.
rsync has --delete and --delete-after options to help accomplish what I want, but the thing is, they don't work for a two-way sync.
Deleting server files on a sync when the local files have been deleted works; but if, for some reason (explained below), some files that are no longer on the server still exist locally, I want them to be removed locally and not copied back to the server (as currently happens).
Thing is I have 3 machines in context:
desktop
notebook
home-server
So, sometimes the server will have had files deleted by a sync from the notebook, for example, and then, when I run a sync from my desktop (where the files deleted from the server still exist), I want those files to be deleted locally and not copied back to the server.
I guess this is only possible with a database and a record of operations :P
Any simpler solutions?
Thank you.
Try Unison: http://www.cis.upenn.edu/~bcpierce/unison/
Syntax:
unison dirA/ dirB/
Unison asks what to do when files differ, but you can automate the process by using the following, which accepts the default (non-conflicting) actions:
unison -auto dirA/ dirB/
unison -batch dirA/ dirB/ asks no questions at all, and writes to output how many files were ignored (because they conflicted).
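If you run this regularly, a Unison profile saves retyping the roots; a hedged sketch using the question's paths (the profile name 'music' is arbitrary):
# ~/.unison/music.prf describes the two replicas once
cat > ~/.unison/music.prf <<'EOF'
root = /media/Incoming/music
root = ssh://server//media/10001/music
auto = true
EOF
# after that, a sync is just:
unison music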
Note: I am no longer using Unison (I use NextCloud, which doesn't address the original use case). However, note that rsync is not designed for bidirectional sync, while Unison is. Unison may have its bugs (like any other piece of software) and its wrinkles. I am surprised that it now seems to be actively maintained (last time I looked, I thought it looked dead), but I'm not sure what its state is nowadays. I haven't had the need for a two-way file synchronizer, so there may be better options, though.
Since the original question also involves a desktop and a laptop, and an example involving music files (so the asker is probably using a GUI), I'd also mention one of the best bidirectional, multi-platform, free and open-source programs to date: FreeFileSync.
It's GUI-based, very fast and intuitive, and comes with filtering and many other options, including the ability to connect remotely, to view and interactively manage "collisions" (for example, files with similar timestamps) and to switch between bidirectional transfer, mirroring and so on.
FreeFileSync can easily sync two computers on the same network and also sync two computers on different and remote networks.
On the same network: have FreeFileSync use the local file system on one side and a shared network drive/path on the other. On Windows systems you enable file/disk sharing on one computer and access that share from the other. I use FreeFileSync this way to keep the source code on my main development PC synced with my 2 laptops.
I have also synced one of these laptops with a Linux server running Samba and sharing one of its directories.
Across networks: create a VPN and do the same as above; FreeFileSync will see the remote disk as if it were on the local network. Alternatively, buy a router that lets you connect a USB disk to it and share it over the internet. I have installed a VPN on a remote Linux server and used it through the OpenVPN Windows client.
You could also try bitpocket: https://github.com/sickill/bitpocket
Try this:
get-music:
	rsync -avzru --delete-excluded server:/media/10001/music/ /media/Incoming/music/
put-music:
	rsync -avzru --delete-excluded /media/Incoming/music/ server:/media/10001/music/
sync-music: get-music put-music
I just tested this and it worked for me. I'm doing a two-way sync between Windows 7 (using Cygwin with the rsync package installed) and a FreeNAS file server (FreeNAS runs on FreeBSD with the rsync package pre-installed).
You might use Osync: http://www.netpower.fr/osync , which is rsync-based with intelligent deletion propagation. It also has multiple options like resuming a halted execution, soft deletion, and time control.
You could try csync; it is the sync engine under the hood of ownCloud.
I'm surprised no one has mentioned Syncthing yet. I have been using it for years to synchronize my phone, my tablet and my two laptops. Once I even used it to send 10 GB of photos to my family ~600 km away, straight from my machine to their machine, and it was incredibly fast (despite the data being routed through Syncthing's relay servers to work around NAT issues). I also tried ownCloud/NextCloud at some point, but Syncthing has been much more reliable and also much faster.
I'm now using SparkleShare https://www.sparkleshare.org/
It works on Mac, Linux and Windows.
I'm not sure whether it helps with two-way syncing, but for --delete to work you also need to add the --recursive parameter.
Rclone is what you are looking for. Rclone ("rsync for cloud storage") is a command-line program to sync files and directories to and from different cloud storage providers, including local filesystems. Rclone was previously known as Swiftsync and has been available since 2013.

Hosting Multiple Domains on Same Server Port with Apache2

How do I configure Apache2 via Webmin or the command line (I'm using RHEL5 Linux) so that I can have multiple domains on the same server on the same port, but in different subdirectories?
For instance, I am trying to get homerentals.ws and homerepair.ws to be detected on port 80 (the default port) on the same server. I know that my DNS holds the two addresses, and web hits currently go to the same test page. Now all I need is for web hits to go to a subdirectory, but not show this subdirectory. For instance, I do not want people going to http://homerentals.ws and being redirected to http://homerentals.ws/homerentals/. Instead, http://homerentals.ws would go to /var/www/html/homerentals, while http://homerepair.ws would go to /var/www/html/homerepair, but the URL would not look any different.
On IIS, I did this once with host-header detection, but I don't know how to do it on RHEL5 Linux via Webmin or by editing files. I'm stuck.
The feature you're describing is known as name-based virtual hosts. Have a look at Apache's documentation. In general you need to edit Apache's main configuration file to make things happen (on RHEL that is /etc/httpd/conf/httpd.conf rather than /etc/apache2/httpd.conf; maybe it can also be edited through Webmin, but I'm not familiar with it).
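A hedged sketch of what that could look like with the two domains and directories from the question, on RHEL5's Apache 2.2 (appended directly to httpd.conf here only for brevity; a separate include file works the same way):
# declare name-based virtual hosts for the two domains, then reload
cat >> /etc/httpd/conf/httpd.conf <<'EOF'
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName homerentals.ws
    DocumentRoot /var/www/html/homerentals
</VirtualHost>

<VirtualHost *:80>
    ServerName homerepair.ws
    DocumentRoot /var/www/html/homerepair
</VirtualHost>
EOF
service httpd reload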
