Nginx React App - 403 Permissions Denied (only for some files)

I will start by noting that I have already looked through a large number of existing questions on this topic.
I have set up a droplet on DigitalOcean and am currently trying to configure my React application to run on it with nginx. It was running fine before I decided to remove my static files and replace them with another folder (instead of replacing them file by file... damn).
My nginx configuration user is nginx, and I believe I've set the correct permissions on all static data in the directory nginx reads from. However, I still think the problem lies with permissions, and I'm hoping someone can help me figure it out.
Permissions on the directories
nginx.conf has the following set: user nginx;
The running nginx processes and the users running them:
root 23264 1 0 13:11 ? 00:00:00 nginx: master process /usr/sbin/nginx
nginx 23265 23264 0 13:11 ? 00:00:00 nginx: worker process
Maybe some directories/files need to remain chowned to root?
This is also the network result from my browser:
It looks as if some files can be accessed but the minified JS files cannot?

Right, I finally found the answer to my own question...
For anyone who faces the same issue, running the following command on the static folder inside grade-ui fixed it:
chmod -R +rx css
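More generally, the nginx worker needs read permission on each static file and execute (traverse) permission on every directory in the path leading to it, starting from /. A quick sketch of how to check and fix this, assuming a document root of /var/www/grade-ui/static (that path and the file name are guesses based on the folder names above; substitute your own root):

# Show owner and mode of every path component down to one of the failing files
namei -om /var/www/grade-ui/static/js/main.min.js

# Give directories traverse+read and files read access
sudo find /var/www/grade-ui/static -type d -exec chmod 755 {} \;
sudo find /var/www/grade-ui/static -type f -exec chmod 644 {} \;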

Related

Keep getting Nginx 403 Forbidden error when accessing a react js application

Please, I have a React JS application that I want to deploy on my server (CentOS 7). I have already generated the build, installed nginx on the server, and created the folder www under /var/, where I have put the contents of my build, giving the path /var/www/merchant-dashboard/html.
I have also already given 777 permissions to the www folder and all its subdirectories and files.
I created my configuration file under /etc/nginx/conf.d and named it merchantDashboard.conf; here is its content:
I have also set the permissions for the nginx user with the command
sudo chown -R nginx:nginx *
(my user is called nginx), but I still get the 403 Forbidden error.
Here are my error logs:
If someone can help me, please.
I followed this answer:
Why does Nginx return a 403 even though all permissions are set properly?
When I disabled SELinux, it worked.
The following command from the above-mentioned link solved the problem:
chcon -Rt httpd_sys_content_t /path/to/www
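If you would rather keep SELinux enabled, the usual alternative is to give the web root the httpd_sys_content_t label permanently instead of disabling SELinux or relabelling with chcon (which does not survive a filesystem relabel). A sketch, assuming the web root from the question (/var/www/merchant-dashboard/html) and that semanage is available (on CentOS 7 it comes from the policycoreutils-python package):

# Inspect the current SELinux labels on the files nginx serves
ls -Z /var/www/merchant-dashboard/html

# Record a persistent labelling rule for the whole tree and re-apply it
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www(/.*)?"
sudo restorecon -Rv /var/www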

Can't access .well-known/acme-challenge folder

I'm trying to install an SSL certificate on my shared hosting by Plesk.
It worked before, but the renewal went wrong.
I finally uninstalled the certificate, but when I try to get a new one, I can't access the .well-known/acme-challenge folder.
I tried to put a test file inside, but it ends up with a 404 error.
If I place the file inside .well-known, I can access it.
If I rename the acme-challenge folder to acme2-challenge, I can access it.
What makes this specific acme-challenge file so protected, and where can I unprotect it?
There may be an Apache module or config that controls that directory. Look for a config called acmetool and the module md in the Apache configuration, or on the command line with grep -rinF acme /etc/apache2.
There are two common modules that manage ACME, so to fix it you may run sudo a2disconf acmetool or sudo a2dismod md, then regenerate the certificate (you may have to wait an hour or a day if you have hit Let's Encrypt's rate limit).
To avoid the issue coming back, also look for the package that modified the Apache config.
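Once the offending config or module is disabled and Apache has been reloaded, it is worth confirming that the challenge path is reachable before retrying the renewal. A rough sketch using placeholder names (example.com and the Plesk-style docroot are assumptions; adjust them to your vhost):

# Put a test file where the ACME client would write its challenge
mkdir -p /var/www/vhosts/example.com/httpdocs/.well-known/acme-challenge
echo ok > /var/www/vhosts/example.com/httpdocs/.well-known/acme-challenge/test.txt

# Reload Apache, then check that the file is served over plain HTTP
sudo systemctl reload apache2
curl -i http://example.com/.well-known/acme-challenge/test.txt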

I'm getting the message "The page has expired due to inactivity" in Laravel 5.5

When I run the project locally, everything works perfectly. But when I deploy the project to production, I get the message "The page has expired due to inactivity" every time I submit a form with the POST method.
There are many questions about this problem and I've tried every possible solution:
1- My form contains the token {{ csrf_field() }}
2- I've changed the name of my app (APP_NAME)
The session driver and the cache driver are set to 'file'.
I've heard that maybe the storage directory is not writable, and that is where sessions are stored. I don't know how to check this when my project is deployed on GCloud (Google Cloud Platform).
Thanks
UPDATE 1
I posted my question on Laracasts and someone said that this happens when Laravel can't write to the storage/sessions directory. When I deploy my project to GCloud, I don't know how to make it writable by the server.
My composer.json file has this config right now:
"post-install-cmd": [
"Illuminate\\Foundation\\ComposerScripts::postInstall",
"php artisan optimize",
"chmod -R 755 bootstrap\/cache"
]
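If the storage directory not being writable really is the cause, the same kind of post-install step that already handles bootstrap/cache could be extended to cover storage as well. A rough sketch of what would need to run on the deployed instance (the paths follow the standard Laravel layout; whether the deployed filesystem is writable at all on your GCloud setup is an assumption to verify first):

# Laravel's file session driver writes under storage/framework/sessions
ls -ld storage storage/framework/sessions

# Make the storage tree writable by the web server user, like the existing bootstrap/cache line
chmod -R 775 storage bootstrap/cache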
UPDATE 2
For now I have changed the SESSION_DRIVER to cookie and it works in production.
Change the SESSION_DRIVER in your app.yaml to cookie, like:
env_variables:
  # Put production environment variables here.
  APP_LOG: errorlog
  APP_DEBUG: false
  SESSION_DRIVER: cookie

Mesosphere installation PermissionError: /genconf/config.yaml

I got Mesosphere-EE and installed it on a Fedora 23 server (kernel 4.4) with:
$ bash dcos_generate_config.ee.sh --web -v
Then the output was:
Running mesosphere/dcos-genconf docker with BUILD_DIR set to /home/mesos-ee/genconf
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
07:53:46:: Logger set to DEBUG
07:53:46:: ====> Starting DCOS installer in web mode
07:53:46:: DCOS Installer v1
07:53:46:: Starting server ('0.0.0.0', 9000)
Then I started Firefox through VNC (the VNC session runs as root), and then:
07:53:57:: Root page requested.
07:53:57:: Serving /usr/local/lib/python3.4/site-packages/dcos_installer/templates/index.html
07:53:58:: Request for configuration type made.
07:53:58:: Configuration file not found, /genconf/config.yaml. Writing new one with all defaults.
07:53:58:: Error handling request
PermissionError: [Errno 13] Permission denied: '/genconf/config.yaml'
But I already have a genconf/config.yaml; it looks like:
bootstrap_url: http://<bootstrap_public_ip>:<your_port>
cluster_name: '<cluster-name>'
exhibitor_storage_backend: zookeeper
exhibitor_zk_hosts: <host1>:2181,<host2>:2181,<host3>:2181
exhibitor_zk_path: /dcos
master_discovery: static
master_list:
- <master-private-ip-1>
- <master-private-ip-2>
- <master-private-ip-3>
superuser_username: <username>
superuser_password_hash: <hashed-password>
resolvers:
- 8.8.8.8
- 8.8.4.4
I do not know what's going on. If you have any idea, please let me know. Thank you very much!
Disable SELinux!
Set SELINUX=disabled in the /etc/selinux/config file and then reboot!
Make sure SELinux is disabled with the getenforce command:
$ getenforce
Disabled
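If you want to test quickly before committing to a reboot, SELinux can also be switched to permissive mode for the current boot only (this reverts at the next reboot, and getenforce will then report Permissive rather than Disabled):

# Temporary: stop SELinux from enforcing until the next reboot
sudo setenforce 0
getenforce   # should now print "Permissive"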
Correctly installing the enterprise edition depends on having the correct system prerequisites. Anyway, I suppose you're still on the bootstrap node, so I will give you a path to succeed in your current task.
Run the script as root, or as a regular user via sudo: sudo bash dcos_generate_config.ee.sh
The script will also generate the config file automatically; if you want to use your own configuration file, create a folder named genconf and put it inside before running the script. You should change the values inside <> to your specific configuration. If you need more help for your specific case, send me an email at infofs2 at gmail.com.
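Putting the two answers together, a minimal sequence on the bootstrap node might look like the following; the directory name mirrors the BUILD_DIR shown in the log above, so treat it as an assumption and adapt it to your layout:

# Run from the directory that contains the installer and your genconf/ folder
cd /home/mesos-ee
ls -l genconf/config.yaml   # confirm the pre-written config exists and is readable

# Run the installer with root privileges so it can write inside genconf/
sudo bash dcos_generate_config.ee.sh --web -v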

Server giving 404 not found

I am deploying a Spring application which contains files with around 100,000 entries. Each row in a file has about 23 characters.
The app deploys fine when a file has 100,000 entries, but when I increase the contents to 400,000 entries, accessing my app URL gives a 404 Not Found error.
I need to figure out what causes the crash (whether a memory problem or something else), but I do not see anything erroneous in the Tomcat log files using the command vmc files [app_name] tomcat/logs/catalina...; there are just info messages related to server startup.
Are there other options to debug the issue?
Thanks,
Cristian
I would look into what Dan has mentioned! Also, can you look at the logs folder to see if the files there give more information:
vmc logs <app-name>
or
vmc files <app-name> logs/stderr.log
vmc files <app-name> logs/stdout.log
Okay, the application was using too much memory; as a result, the Java process was being destroyed, causing the router to return a 404 when trying to route to the application.
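If memory really is the limit, the legacy vmc client let you raise an application's memory reservation. The exact command names below are recalled from the old v1 tooling and may differ between vmc versions, so treat them as assumptions and check vmc help on your installation:

# Show the app's current resource usage (syntax may vary by vmc version)
vmc stats <app-name>

# Raise the memory reservation for the app (value format may vary by vmc version)
vmc mem <app-name> 1G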
