I installed vsftpd and got it running with the user ftpuser; the owner and group of /var/www are set to ftpuser:ftpuser. I can upload, view, edit and delete files over FTP, which is nice.
But the website itself can't do anything: e.g. it can't upload files via PHP, can't run installers and so on.
So I changed the owner to www-data:www-data. Now I can upload files via HTTP and update my WordPress,
but I can no longer change files via FTP (550 Create directory operation failed).
I have added ftpuser to the group www-data, but I still can't do anything on the server.
My vsftpd.conf:
listen=NO
listen_ipv6=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
secure_chroot_dir=/var/run/vsftpd/empty
pam_service_name=vsftpd
force_dot_files=YES
pasv_min_port=40000
pasv_max_port=50000
allow_writeable_chroot=YES
Two different processes (Apache and vsftpd) run as different users and groups. Files and directories have a user owner and a group owner, so you need to configure permissions that let Apache read and write files and directories owned by vsftpd's user (or vice versa, depending on how you set up permissions and groups).
A solution could be:
create a common group called, for example, 'web-manager'
change the group of the folder '/var/www' to web-manager (chgrp web-manager /var/www)
allow members of the 'web-manager' group to write to the '/var/www' folder (chmod 775 /var/www)
put the Apache and vsftpd users in the 'web-manager' group (usermod -a -G web-manager www-data; usermod -a -G web-manager ftpuser)
restart the Apache and vsftpd daemons
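The steps above can be sketched as a shell session. The group name web-manager is the example from the list, and the service names assume a Debian/Ubuntu layout:

```shell
# Create a shared group and make /var/www group-writable by it.
sudo groupadd web-manager
sudo chgrp -R web-manager /var/www
sudo chmod -R 775 /var/www
# Optional: setgid on directories so new files inherit the web-manager group.
sudo find /var/www -type d -exec chmod g+s {} +
# Add both the Apache user and the FTP user to the group.
sudo usermod -a -G web-manager www-data
sudo usermod -a -G web-manager ftpuser
# Restart both daemons so they pick up the new group membership.
sudo systemctl restart apache2 vsftpd
```

Note that with local_umask=022 in vsftpd.conf, FTP-uploaded files will not be group-writable; set local_umask=002 if Apache must also be able to modify files uploaded over FTP.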
I'm trying to install an SSL certificate on my shared hosting by Plesk.
It worked before, but the renewal went wrong.
I finally uninstalled the certificate, but when I try to get a new one, I can't access the .well-known/acme-challenge folder.
I tried to put a test file inside, but requesting it ends up with a 404 error.
If I place the file inside .well-known, I can access it.
If I rename the acme-challenge folder to acme2-challenge, I can access it.
What makes this specific acme-challenge file so protected, and where can I unprotect it?
There may be an Apache module or config that controls that directory. Look for an acmetool config and the md module in the Apache config, either in your control panel or on the command line with grep -rinF acme /etc/apache2.
There are two common modules that manage ACME challenges, so to fix it you may run sudo a2disconf acmetool or sudo a2dismod md, then regenerate the certificate (you may have to wait an hour or a day if you hit Let's Encrypt's rate limit).
To avoid the issue recurring, also find out which package modified the Apache config.
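A possible diagnostic sequence, assuming a Debian-style Apache layout, a document root of /var/www/html, and example.com as a placeholder domain:

```shell
# Find which Apache config claims the ACME challenge path.
grep -rinF acme /etc/apache2
# Disable whichever one grep turned up (only one of these will apply):
sudo a2disconf acmetool   # if an acmetool conf redirects the path
sudo a2dismod md          # if mod_md is intercepting it
sudo systemctl reload apache2
# Verify the challenge directory is reachable again:
sudo mkdir -p /var/www/html/.well-known/acme-challenge
echo probe | sudo tee /var/www/html/.well-known/acme-challenge/probe
curl -i http://example.com/.well-known/acme-challenge/probe
```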
There is a mysterious folder being shared on an internal server running Ubuntu 18.04.2 LTS.
Nothing in the /etc/samba/smb.conf file points to that folder. This share was configured before I had access to the server, and the person who set it up curiously does not remember how they did it.
How can I find out how that share was created?
# To access your network share, first install the client:
sudo apt-get install smbclient
# List all shares on a host:
smbclient -L //<HOST_IP_OR_NAME> -U <user>
# Connect to a specific share:
smbclient //<HOST_IP_OR_NAME>/<share_name> -U <user>
To access your network share, use your username and password with the path smb://<HOST_IP_OR_NAME>/<share_name> (Linux users) or \\<HOST_IP_OR_NAME>\<share_name> (Windows users). Note that <share_name> is the name given in square brackets ("[...]") in /etc/samba/smb.conf.
Note: Samba's default workgroup is "WORKGROUP".
I hope this helps.
Might as well put this out there in the event that someone else comes across it. I also had a mysterious samba share that was not listed in the smb.conf.
It turns out I had a symbolic link that was either shared before or pointed to a folder that was shared. Deleting the symlink got rid of the samba share after a smbd service restart. Restoring the symlink from the trash caused the samba share to come back. Very mysterious indeed.
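Shares can also be defined outside smb.conf via Samba's usershare mechanism. A few hedged places to look (the paths are typical Ubuntu defaults, and /srv and /home are just likely share locations):

```shell
# Shares created with "net usershare" live outside smb.conf:
net usershare list
ls -l /var/lib/samba/usershares/
# The effective configuration Samba is actually serving:
testparm -s
# Symlinks under likely shared trees; as noted above, a share can follow one:
find /srv /home -maxdepth 3 -type l -ls
```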
I got Mesosphere-EE and installed it on a Fedora 23 server (kernel 4.4) with:
$ bash dcos_generate_config.ee.sh --web -v
The output was:
Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/mesos-ee/genconf
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
07:53:46:: Logger set to DEBUG
07:53:46:: ====> Starting DCOS installer in web mode
07:53:46:: DCOS Installer v1
07:53:46:: Starting server ('0.0.0.0', 9000)
Then I started Firefox through VNC (the VNC session runs as root), and got:
07:53:57:: Root page requested.
07:53:57:: Serving /usr/local/lib/python3.4/site-packages/dcos_installer/templates/index.html
07:53:58:: Request for configuration type made.
07:53:58:: Configuration file not found, /genconf/config.yaml. Writing new one with all defaults.
07:53:58:: Error handling request
PermissionError: [Errno 13] Permission denied: '/genconf/config.yaml'
But I already have a genconf/config.yaml; it looks like:
bootstrap_url: http://<bootstrap_public_ip>:<your_port>
cluster_name: '<cluster-name>'
exhibitor_storage_backend: zookeeper
exhibitor_zk_hosts: <host1>:2181,<host2>:2181,<host3>:2181
exhibitor_zk_path: /dcos
master_discovery: static
master_list:
- <master-private-ip-1>
- <master-private-ip-2>
- <master-private-ip-3>
superuser_username: <username>
superuser_password_hash: <hashed-password>
resolvers:
- 8.8.8.8
- 8.8.4.4
I do not know what's going on. If you have any idea, please let me know. Thank you very much!
Disable SELinux!
Set SELINUX=disabled in the /etc/selinux/config file and then reboot.
Make sure SELinux is disabled by running the getenforce command:
$ getenforce
Disabled
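As a one-shot sketch (disabling SELinux outright is heavy-handed; setenforce 0 or SELINUX=permissive is a softer way to test whether SELinux is really the culprit):

```shell
sudo setenforce 0   # takes effect immediately, lasts until reboot
# Make it permanent across reboots:
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
sudo reboot
# After the reboot:
getenforce          # should print "Disabled"
```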
zhe.
Correctly installing the enterprise edition depends on the correct system prerequisites. Anyway, I suppose you're still on the bootstrap node, so I will give you some pointers to succeed in your current task.
Run the script as root, or as a regular user via sudo bash dcos_generate_config.ee.sh.
The script will also generate the config file automatically; if you want to use your own configuration file, create a folder named genconf and put the file inside it before running the script. You should change the values inside <> to your specific configuration. If you need more help with your specific case, send me an email at infofs2 at gmail.com.
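For example (paths are illustrative; run this from the directory holding the installer script and your pre-filled config):

```shell
# Put the config where the installer expects it, then run the script as root.
mkdir -p genconf
cp config.yaml genconf/config.yaml
sudo bash dcos_generate_config.ee.sh --web -v
```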
We have a web app. This web app is installed for each client of ours in a different folder in our VPS. We also have a separate folder with the base files of the web app (all code up to date).
The problem we're having is: we need to automate the update process of the web app for all client installations. Therefore, if we add files to the base web app, or move files, or create a directory, or remove a file or directory, these changes should be reflected automatically (applied to) on every client installation of the web app. Currently we're on beta and each code update results in a manual update of all files for each client installation using FTP, and the more changes done, the more time this process takes and the more complex it becomes.
Is there a tool available to automate this kind of process? Or if not, how do you suggest it should be approached?
/
/clients
/client1.domain.com
/[web app subfolders and files...]
/client2.domain.com
/[web app subfolders and files...]
/client3.domain.com
/[web app subfolders and files...]
/base_web_app
/[web app subfolders and files...]
So basically, each time we do any changes to the contents of /base_web_app, those changes should be automatically applied (sync) to the web app installations inside /clients (that is, /client1.domain.com, /client2.domain.com, /client3.domain.com).
It is also important to note that we need some files and/or subfolders to be ignored/not overwritten. Mainly configuration files specific to each client's installation.
Check out rsync: http://rsync.samba.org/examples.html It is a tool for synchronizing files from one area to another (say, your staging area to your production area). You can use patterns to specify what to sync and what to exclude, and it only copies changed files.
On your staging area (where you have the latest changes you want to sync), you could do something like this:
# sync the staging base_web_app directory to the production /base_web_app
rsync -avc base_web_app/ server:/base_web_app/
# sync base_web_app into each clients/client* directory, excluding each client's config directory
rsync -avc --exclude 'config/' base_web_app/ server:/clients/client1.domain.com/
rsync -avc --exclude 'config/' base_web_app/ server:/clients/client2.domain.com/
rsync -avc --exclude 'config/' base_web_app/ server:/clients/client3.domain.com/
Add --delete if files removed from base_web_app should also be removed from the client installations; excluded paths such as config/ are protected from deletion by default.
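The per-client commands can be wrapped in a loop. A minimal local sketch follows; the sync_clients helper and the demo paths are hypothetical, and on the real VPS you would call it with /base_web_app and /clients (or prefix the destination with server: for a remote sync):

```shell
# sync_clients: copy a base tree into each client dir, excluding per-client config/.
# --delete removes files dropped from the base; excluded paths are protected.
sync_clients() {
  local base=$1 clients=$2 client
  for client in "$clients"/*/; do
    rsync -avc --delete --exclude 'config/' "$base"/ "$client"
  done
}

# Demo on throwaway directories (stand-ins for /base_web_app and /clients):
demo=$(mktemp -d)
mkdir -p "$demo/base/app" "$demo/clients/client1.domain.com/config"
echo v2   > "$demo/base/app/index.php"
echo keep > "$demo/clients/client1.domain.com/config/settings.php"
sync_clients "$demo/base" "$demo/clients"
cat "$demo/clients/client1.domain.com/app/index.php"        # prints v2
cat "$demo/clients/client1.domain.com/config/settings.php"  # prints keep (config/ untouched)
```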
I have a CM Server for ClearCase Remote Clients on Windows 2003 Server.
The package is installed on the server.
I'm getting the error below in the CCRC client:
"CRVAP0087E CCRC command "checkin" failed: Unable to create pathname for
file "C:\ccweb\v973012\v973012_Latest_FMC": Permission denied"
Can you please help me to fix this issue?
The most efficient way to troubleshoot this issue is to go directly to the CCRC server, into the CCRC web (snapshot) view, and type:
cd c:\ccweb\v973012
cleartool lsview -l -full -pro -cview
That way, you can see which group the user used when creating the CCRC view in the first place.
That group must be one of the groups associated with the VOB of the element being checked in.
Check also this technote:
During the installation of Rational ClearCase, you are asked for a temporary directory to use during the installation to unzip large artifacts.
This temporary directory is used to preserve user configuration settings during an update or uninstall process.
If you specify a directory that is mounted on a file system that is separate from the installation directory, the file permissions and owners are not preserved when the files are moved across the file systems.
Finally, check for any trigger set on the Vob: they can have an unwanted side-effect with that CCRC 'checkin' operation.
In the specific case of the OP mth123, he suggests:
Go to the Windows Explorer and try these steps:
Go the the following directory: C:\ccweb
Rename the folder v123412 to V123412 (only change the first letter, capitalizing the "v").
Check if the problem is solved.
So depending on the actual view tag, this could be a case-sensitivity issue in the path.
The latest patch caused this issue. While applying the patch, ClearCase preserves some files, including the ccweb folder; when restoring those folders during the post-install, the directories' case may have changed (lowercase/uppercase). This issue is currently in IBM's queue.
If you are going to upgrade CCRC, make a copy of the ccweb directory with the proper ACLs, or use ccopy.exe (a ClearCase utility).