I'm working on getting a three-tier application running on Heroku. The backend is a Flask API with a React frontend. Everything works locally, and ideally I would like to run both on the same server.
On deployment, I can access (public) API routes. However, at the index route I receive:
applicationName.herokuapp.com/:1
Failed to load resource: the server responded with a status of 404 (NOT FOUND)
My directory setup is:
Deployment
+-- __pycache__
+-- venv
+-- .env
+-- .gitignore (currently empty)
+-- app.py
+-- config.py (contains python dictionary with details for mail/database config)
+-- database.db (sqlite, 1 entry)
+-- Procfile
+-- requirements.txt
+-- frontend
| +-- build
| +-- node_modules
| +-- public
| +-- src
| +-- package.json
| +-- package-lock.json
| +-- webpack.config.js
requirements.txt was generated via pip freeze prior to deployment. frontend/build was generated via npm run build, again just prior to deployment.
I've tried various approaches and tweaks to both server and client with no success. I've seen approaches that involve moving build and/or package.json to the root directory, but this did not work. I feel like I'm missing something fairly simple but I cannot for the life of me figure it out.
Select code excerpts include:
app.py
app = Flask(__name__, static_folder='/frontend/build', static_url_path="/frontend/public/index.html")

@app.route('/test')
@cross_origin()
def test():
    return 'success'  # *applicationName*.herokuapp.com/test works correctly

@app.route('/')
@cross_origin()
def serve():
    return send_from_directory(app.static_folder, 'index.html')

if __name__ == "__main__":
    app.run(host='0.0.0.0')
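For comparison, here is a minimal sketch of how serving a React build from Flask usually looks. Note that static_folder should be a real filesystem path (no leading slash, relative paths resolve against the app's root), and static_url_path is a URL prefix, not a file. The frontend/build layout is taken from the directory listing above; everything else is an assumption, not the asker's actual working code.

```python
import os
from flask import Flask, send_from_directory

# Filesystem path to the React build output (assumed layout: build sits
# in frontend/build next to app.py).
build_dir = os.path.abspath(os.path.join('frontend', 'build'))

# static_url_path='/' serves build assets from the site root; Flask
# normalizes the trailing slash internally.
app = Flask(__name__, static_folder=build_dir, static_url_path='/')

@app.route('/')
def serve():
    # Hand the React entry point to the browser for the index route.
    return send_from_directory(app.static_folder, 'index.html')
```

With a setup like this, /static/js/main.*.js and friends resolve against the build directory, which is typically why the index route stops returning 404.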
Procfile
web: gunicorn app:app --log-file=-
frontend/package.json
{
  "name": "frontend",
  "version": "0.1.0",
  "private": true,
  "proxy": "https://*applicationName*.herokuapp.com/",
.env
FLASK_APP=app.py
FLASK_ENV=production
Related
root/
├─ frontend/
├─ backend/
.gitignore
.git/
I have a root folder, and inside that root I have two folders: one is frontend and the other is backend. Git is initialized on the root folder. Now I want to deploy the frontend on Firebase and the backend on Heroku. The problem is that deploying the backend on Heroku will ask me to initialize Git there. How do I handle this situation?
Hello, I have a project that I inherited a while back. The project was originally built using AngularJS. I am currently preparing to migrate the project to Angular 2, and was attempting to switch my controllers over to components. When I attempt to change
angular.module('main').controller...
to
angular.module('main').component...
it complains that the property 'component' does not exist on type 'IModule'
From my googling I suspect it is tied to the typings (the project currently uses TypeScript), but I am having trouble figuring out how to update this. When I run npm list on the project, the first few lines return
@types/angular@1.7.3
+-- @types/angular-ui-bootstrap@0.13.47
| `-- @types/angular@1.7.3 deduped
+-- angular@1.7.8
+-- angular-route@1.7.8
further down I have
| `-- typescript@1.6.2
Obviously I did not originally set up this project, so I'm learning how this whole setup works. Please let me know what further information is needed.
I am setting up a project that has a structure like this:
The components folder contains lots of React components.
The examples folder uses components as a local dependency.
So, right now, the package.json of examples is like this:
{
  ...
  "devDependencies": {
    "components": "../components"
    ...
  }
}
And I want the examples to recompile when I change the code in components.
Any idea how I can do this?
EDIT
My project structure is like this, and I am using webpack.
.
+-- components
| +-- index.js
| +-- package.json
+-- examples
+-- index.js
+-- package.json
If your project is using webpack, then hot reloading will work: https://webpack.js.org/concepts/hot-module-replacement/
Otherwise, if it is just Node, you can use a tool like nodemon, e.g.:
$ npm install nodemon -g
$ nodemon app.js
This automatically picks up changes.
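For the webpack setup described above, a minimal sketch of the examples/package.json might look like this (the script names are assumptions, not from the original project; the file: path matches the structure shown):

```json
{
  "scripts": {
    "start": "webpack --watch"
  },
  "devDependencies": {
    "components": "file:../components"
  }
}
```

As far as I know, npm 5+ installs file: dependencies as symlinks, so webpack's watch mode sees edits in ../components and recompiles the example automatically.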
I cannot send an HTTP request to the backend container when the app is running on an AWS production server. However, when I run the app locally, I can make requests to the backend just fine. For making requests I use fetch:
fetch('http://localhost:8000/something')
Here is what the project structure looks like:
.
├── docker-compose.yml
|
├── backend
│ ├── Dockerfile
│ └── server.js
|
└── frontend
├── Dockerfile
├── package.json
├── public
│ └── index.html
└── src
├── components
├── data
├── index.js
├── routes.js
├── static
├── tests
└── views
docker-compose.yml:
version: '3'
services:
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile
    volumes:
      - ./frontend:/frontend
    ports:
      - "80:5000"
    links:
      - backend
  backend:
    build:
      context: .
      dockerfile: backend/Dockerfile
    volumes:
      - ./backend:/backend
    ports:
      - "8000:8000"
Dockerfile in frontend:
FROM node:latest
RUN mkdir -p /frontend
WORKDIR /frontend
ADD . /frontend
VOLUME ["/frontend"]
EXPOSE 5000
CMD yarn && yarn build && yarn global add serve && serve -s build
Dockerfile in backend:
FROM node:latest
RUN mkdir -p /backend
WORKDIR /backend
ADD . /backend
VOLUME ["/backend"]
EXPOSE 8000
CMD yarn && yarn start
Can someone explain to me what is wrong with my config? I'm very confused because it works locally without any issues.
TL;DR: You need to change the frontend code to call the current host instead of 'localhost'.
The problem is that your app is calling localhost instead of the VPS IP when visited from YOUR browser. You need to edit your frontend code to call the host you're currently visiting; that's why the request ends up hitting YOUR local machine.
Instead of fetch("http://localhost:8000/something"), change it to fetch("http://" + location.host + ":8000/something") (there are better ways, but this gets it done).
Also note that Docker containers are a little different in terms of networking. A Docker container doesn't really have a concept of localhost the same way non-containerized apps do. You have to use the VPS's IP (or local IP) when making a call from server to server. A trick I use is Docker's default docker0 bridge, 172.17.0.1.
I tend to use networks rather than links, so I can't comment fully on links, but when containers are on the same Docker network you can reach another container by its name. This only works for server-side code, e.g. a Node.js server talking to another Node.js server or to MongoDB. An example MongoDB connection string would be mongodb://mongo_server:27017/mydatabase, where mongo_server resolves to that container's IP.
Another thing you'll possibly encounter when attempting to use the IP is your firewall; you would have to allow that particular IP/port through the firewall as well.
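To illustrate the networks approach with the asker's compose file, a sketch might look like this (the network name appnet is an assumption; volumes omitted for brevity):

```yaml
version: '3'
services:
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile
    ports:
      - "80:5000"
    networks:
      - appnet
  backend:
    build:
      context: .
      dockerfile: backend/Dockerfile
    ports:
      - "8000:8000"
    networks:
      - appnet
networks:
  appnet:
```

With this, server-side code in the frontend container could reach the backend at http://backend:8000; browser-side fetch calls still need the public host, as described above.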
I have a Yeoman full-stack app @2.0.13 with the exact same directory structure as in this tutorial.
Everything works fine: grunt serve:dist etc. works without any errors. Now I want to go into production and deploy the app on an Apache server as example.com/xxx using mod_proxy. I copy the grunt build generated /dist directory to a home directory and start up the server app:
NODE_ENV=production node server/app.js
The app starts up, populating users and so on. Everything works well. Now I setup virtual host settings for the node app :
<Location /html/xxx/>
  ProxyPass http://127.0.0.1:9000/
  ProxyPassReverse http://127.0.0.1:9000/
</Location>
This sort of works. The weird thing is that index.html from the dist directory is loaded correctly:
dist
├── public
│ ├── app
│ ├── assets
│ ├── bower_components
│ └─ index.html <---
|
└── server
├── api
├── auth
├── components
├── config
├── views
├─ app.js
└─ router.js
The ProxyPass works and index.html is loaded, but the files index.html refers to (the four assembled public/app files: vendor.js, app.js, and so on) are not. I get a 404 no matter what I have tried, no matter what setup from any guide I have tested.
I have really spent many hours on this. To me it seems that the reverse proxy somehow alters the internal URLs? The setup works if I replace dist/ with a node script that just listens on port 9000 and returns hello world.
What am I missing? Is there another way to do this?
So I was looking all over the place for quite some time as well but finally found this link:
Apache and Yeoman built NodeJS app on the Same Server... Virtual Hosts?
which put me on the right track. I think what should fix it is to change the base tag in the head section of your index.html file to:
<base href="/xxx/">
and then the files should be accessible; at least that worked for me.
My vhost settings look like this:
<Proxy /xxx/*>
  Order deny,allow
  Allow from all
</Proxy>
ProxyPass /xxx http://127.0.0.1:6969/
ProxyPassReverse /xxx http://127.0.0.1:6969/
For anyone else having this problem: I ended up using port 80. This works. In a Node environment you do not need a /www directory or Apache/nginx anyway, so get rid of them and use
// Server port
port: process.env.OPENSHIFT_NODEJS_PORT ||
process.env.PORT ||
80,
in the Yeoman production.js file. That's it. An entire application can be served from within a user's home/ directory without installing any HTTP server software. Just upload dist anywhere on your server and use forever to run:
sudo NODE_ENV=production forever start server/app.js