I am using Parallels Plesk 12.0.18
The problem is that when I try to open httpdocs it warns me [Unable to open the directory: Access denied] and I can't open the directory. Any solutions?
What are the permissions and ownership of the httpdocs directory? Please set the correct ownership on the httpdocs directory and check it again.
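A minimal sketch of how you might check and reset that ownership over SSH, assuming the default Plesk vhosts layout and a subscription system user named example_user (both the path and the user name are illustrative; psacln is the group Plesk typically assigns to site content):
ls -ld /var/www/vhosts/example.com/httpdocs                          # show current owner, group and mode
chown -R example_user:psacln /var/www/vhosts/example.com/httpdocs    # reset to the subscription's system user
chmod 755 /var/www/vhosts/example.com/httpdocs                       # directory must be readable/executable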
I would like to transfer a file from my local machine to a Google Cloud instance. Here is my command:
gcloud compute scp "C:\Temp\esim_replication.ipynb" nlp-3:
Here is error message:
pscp: unable to open ./esim_replication.ipynb: permission denied
ERROR: (gcloud.compute.scp) [C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\bin\sdk\pscp.exe] exited with return code [1].
This is a brand new error. Everything worked fine 2 weeks ago. I am on Windows 7 locally and ran cmd as Administrator. I tried the above command with and without quotation marks.
Any suggestions?
SSH into the instance via gcloud:
gcloud beta compute ssh --zone "your_zone" "instance_name" --project "project_name"
Give full access to your file:
sudo chmod 777 esim_replication.ipynb
In case someone finds this like I did: I had a similar error message, and what did the trick for me was using sudo: sudo gcloud compute scp [LOCAL] [REMOTE]. Apparently the project ssh metadata needed to be updated (even though copying in the other direction worked just fine).
Encountered the same error while transferring from my local Windows desktop to a Debian VM in GCP.
Changed the permissions of the destination folder to 777 (see the sketch after the command below).
gcloud compute scp source_folder/File1.txt VM_instance_name:destination_folder
It worked!
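For reference, the permission change on the VM side would have looked roughly like this (the folder name is taken from the command above; 777 is very broad, so a narrower mode or a chown is usually safer):
sudo chmod 777 ~/destination_folder    # run inside the VM, e.g. after gcloud compute ssh VM_instance_name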
What? From a Windows machine?
'sudo' is not recognized as an internal or external command,
operable program or batch file.
It turned out that I already had an identically named file at the destination. This caused the error. But Patrick W's comment is very helpful.
How do I create a symlink from public/storage to storage/app/public in Homestead on Windows?
And how do I access my files from the browser, so that if I visit an image URL in the browser it shows that particular image?
Actually, I'm building an API which is accessible from any domain, so I have to return the URL of the particular image that was uploaded, so it can be shown on the front end.
I'm also saving the path to the database, which is storage/app/public/image.png. What should I do now?
I'm new to the file system, so maybe I need step-by-step instructions.
I will be so thankful for the help.
I had the same problem creating a symbolic link from the "public" folder to a location on the "storage" directory.
I tried to use "mklink /j" as well to create the symlink. However, when I "vagrant ssh" to the virtualbox, I found that it did not actually create the link correctly. Attempting to "cd" to the created link would cause an error. Also it wasn't shown as a symlink in the usual linux notation.
To allow the symlink to be created in the virtualBox:
Open "Local Group Policy Editor".
Go to: Computer Configuration | Windows Settings | Security Settings | Local Policies | User Rights Assignment
Find the "Create symbolic links" policy and add your logged in user to it.
Restart your host Windows machine, then SSH into your VirtualBox VM. You may need to run "vagrant up" as an Administrator by opening your CMD using the "Run as administrator" option.
Go to your "public" folder, and create your symbolic link using the linux "ln -s" command. It should work now.
I was using Windows 10, but the above should be the same for Windows 7.
The "Create symbolic links" policy may be located somewhere slightly different for earlier versions of Windows.
Credit should go to this blog: Symlink support in Windows and Virtualbox
I tried this and it works. Running homestead on VirtualBox on Windows 10:
Open cmd as administrator
Click Start->Run
Type 'cmd', and press ctrl-shift-enter
Select 'yes' from the pop-up window
Type the following command: mklink /D c:\<project_directory>\public\storage "/home/vagrant/<project_directory>/storage/app"
Then storage/app will be accessible from public/storage in the Homestead VM. Note that this assumes C:\ is shared as /home/vagrant in the Homestead VM.
On macOS you need to go to your Homestead folder and:
run: vagrant ssh
navigate to your project root folder
run: php artisan storage:link
And you are done.
If you do this without SSHing into Vagrant, it will not work.
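Put together, the sequence looks roughly like this (the project path is illustrative; adjust it to wherever your project is mapped inside the VM):
cd ~/Homestead                       # your Homestead folder
vagrant ssh
cd /home/vagrant/code/myproject      # project root inside the VM
php artisan storage:link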
I've got a problem with the Openbravo module management. When I want to rebuild, Openbravo shows this alert message:
No write permissions to Openbravo folder. Tomcat is not able to write in this folder. Please change permissions.
Can you help me? I don't know which folder that is.
I'm posting this answer on the assumption that you are working on a Linux system.
This problem occurs because your project folder doesn't have the openbravo user's permissions.
On your machine there is an openbravo user, so you need to give the folder openbravo ownership. To do that, go to the folder where you installed Openbravo, for example:
cd /opt/
Now use the ll command to find the folder, and change the project owner by running these two commands:
chown -R openbravo:tomcat6 projectName
After that, use this command:
chmod -R 775 projectName
My sqlite db file gives this error: unable to open database file. I chowned all folders up to my db file to root, but I am still getting this error. I remember that while creating my Django project on the server I created a superuser, and now if I do ls -l I see that the user is that superuser. How is it possible to tell Apache that this superuser should have the right to read/write the db file? Or how else can I solve the problem? I am not an Apache/Linux guru..
Execute chown www-data:www-data <directory> on the directory you want Apache to be able to write to.
You should be able to just leave the file owned by the superuser and change the group so that Apache can read/write it as well.
Change the group for the sqlite file and the containing directory. Try this:
cd <directory with sqlite file>
sudo chgrp www-data . <sqlitefile>
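The group also needs write permission, both on the db file and on the containing directory (SQLite writes journal files next to the database); a hedged follow-up using the same placeholder as above:
sudo chmod g+w . <sqlitefile>    # grant group write on the directory and the db file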
You can find the web server's user and group and change permissions accordingly.
Run cat /etc/passwd to find the right user; it may be apache, http, or www.
And run cat /etc/group to find the right group.
On my system, group = apache and user = apache.
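A minimal sketch of what that looks like once you know the names, assuming apache:apache and an illustrative project path (adjust both to your system):
sudo chown apache:apache /var/www/myproject /var/www/myproject/db.sqlite3
sudo chmod 664 /var/www/myproject/db.sqlite3    # owner/group read-write on the db file
sudo chmod 775 /var/www/myproject               # SQLite also needs write access to the directory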
I am working on a CakePHP 2 project. It originally started out in 2.0.x and then recently migrated to 2.1.0. Throughout the whole development process, I have been receiving the error message below.
It pops up at the top of the page unpredictably. It can happen when I am just viewing different pages, or even after I add a record to the database (yet the record saves properly).
Warning:
SplFileInfo::openFile(/var/www/cake_prj/app/tmp/cache/persistent/cake_core_cake_console_):
failed to open stream:
Permission denied in
/var/www/cake_prj/lib/Cake/Cache/Engine/FileEngine.php on line 293
I recursively set the owner and group of the tmp folder to apache, and still received the message. In addition, I then recursively set the permissions to read, write, and execute for all (chmod 777). The error message still pops up.
Even after changing the owner, group, and permissions, the file in question:
cake_prj/app/tmp/cache/persistent/cake_core_cake_console_
will have its owner and group set back to root, and its permissions set back to default.
What could be causing this problem? Is there a way to ensure that every time this file is generated, it will always be owned by apache:apache with read/write/execute permissions?
You can resolve this by adding a mask to your config in core.php
Cache::config('default', array(
    'engine' => 'File',
    'mask' => 0666,
));
There was a bug report at http://cakephp.lighthouseapp.com/projects/42648/tickets/2172 but it was not considered a bug.
What I personally noticed is that some file owners may be modified when you use the cake script in the console (for instance to bake). The modified files then belong to the user you used in the console.
Could this mean you call cake while being root? Or do you have a root cron job that calls a Cake shell script?
Personally, I am now in the habit of chowning the whole tmp folder content back to the apache user after using the cake script, and it seems to prevent the warning from appearing.
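A one-line sketch of that habit, using the project path from the question and assuming the web server runs as apache:apache (adjust both to your setup):
sudo chown -R apache:apache /var/www/cake_prj/app/tmp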
Instead of giving read/write access to everyone on the tmp/cache directory, I did this:
chgrp -R www-data app/tmp
chmod -R g+rw app/tmp
find app/tmp -type d -exec chmod g+s {} \;
Setting the group of the directories to the Apache user's group and then setting the setgid bit ensures that files created in those directories get the proper group regardless of which user runs the shell script. This also allows you to avoid giving read/write permissions to "other" users.
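To verify the result, a quick check like this (run from the project root) should show the www-data group and an 's' in the directory mode:
ls -ld app/tmp app/tmp/cache    # e.g. drwxrwsr-x ... www-data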
I think the reason for the problem has already been explained: the cron job runs as root, so the files it creates in tmp are not accessible to the web user. The other solutions did not work for me, and I did not want to set tmp permissions to 777, so I ended up setting the cron job to run as the web user. On Debian specifically that would be:
crontab -u www-data -e
Taken from this answer How to specify in crontab by what user to run script?
If you're encountering the SplFileInfo error in CakePHP2 and you're absolutely certain that your file/directory permissions are set up properly, then one other thing to check is your PHP version. Cake2 requires PHP 5.2.8 or greater and although you'd usually be alerted on the default page if you were using the wrong version, you wouldn't be alerted if you'd developed your app on one server and then moved it to another.
I experienced this error after developing a Cake2 app on a PHP5.3 server and then moving it to a PHP 5.1 server. Upgrading to 5.2.17 (which is above 5.2.8) solved the problem.
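A quick, hedged way to check what you're actually running; note the CLI version can differ from the one Apache uses, so a phpinfo() page is the more reliable check for the web side:
php -v    # shows the CLI version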
Use this:
cd cakephp/app/tmp/cache/persistent
sudo chmod 666 myapp*
cd ..
cd models
sudo chmod 666 myapp*
You need to make the app/tmp directory writable by the webserver. Find out what user your webserver runs as (in my case _www) and change the ownership of the app/tmp directory to that user:
$ chown -R _www app/tmp
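If you're not sure which user that is, one rough way to check on a running server (the process name varies: apache2, httpd, _www, nginx, etc., so adjust the pattern):
ps aux | grep -E 'apache2|httpd|nginx' | grep -v grep    # the first column is the user the worker processes run as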
Another solution: the permission conflict occurs because multiple users share the same cache files. So if we split the cache directory into multiple subdirectories, no conflict occurs and no change to the default permissions of directories and files is required.
In the following, each cache subdirectory is named after the type of PHP SAPI handler:
define('CACHE', TMP . 'cache' . DS . php_sapi_name() . DS);
When browsing the website, the active user is apache, and the subdirectory is cache/apache2handler.
When running a batch, the active user is root or the logged-in user, and the subdirectory is cache/cli.
Alternatively, the current user account can be used to name the subdirectory. See:
How to check what user php is running as?