Project is not pushed to Heroku - reactjs

I want to push all the new changes in my project to Heroku, but the changes go straight to my own GitHub repository instead. This is what I get:
git push heroku production:master
Everything up-to-date
With git remote -v:
heroku git@github.com:theo82/expensify-react-app.git (fetch)
heroku git@github.com:theo82/expensify-react-app.git (push)
origin git@github.com:theo82/expensify-react-app.git (fetch)
origin git@github.com:theo82/expensify-react-app.git (push)
My releases in Heroku's git are:
v10 Set FIREBASE_API_KEY, FIREBASE_AUTH_DOMAIN, FIREBASE_DATABASE_URL, FIREBASE_PROJECT_ID, FIREBA… email@gmail.com 2020/06/04 08:56:20 +0300 (~ 23m ago)
v9 Remove KEY config vars email@gmail.com 2020/06/04 08:52:11 +0300 (~ 27m ago)
v8 Set KEY config vars email@gmail.com 2020/06/04 08:45:03 +0300 (~ 34m ago)
v7 Deploy 3284670f email@gmail.com 2020/05/28 08:01:17 +0300
v6 Deploy 8a0c21d0 email@gmail.com 2020/05/27 19:36:41 +0300
v5 Deploy b128207e email@gmail.com 2020/05/27 18:01:10 +0300
v4 Deploy 21c22fda email@gmail.com 2020/05/25 19:26:36 +0300
v3 Deploy 36ba978f email@gmail.com 2020/05/22 09:39:14 +0300
v2 Enable Logplex email@gmail.com 2020/05/22 08:58:26 +0300
v1 Initial release email@gmail.com 2020/05/22 08:58:26 +0300
Surely I messed something up, but I can't see where. I tried the solution posted here, but I still have the same problem. How can I fix it?
Thanks,
Theo.

Your heroku git remote is misconfigured.
$ git remote -v
heroku git@github.com:theo82/expensify-react-app.git (fetch)
heroku git@github.com:theo82/expensify-react-app.git (push)
origin git@github.com:theo82/expensify-react-app.git (fetch)
origin git@github.com:theo82/expensify-react-app.git (push)
They all point to your GitHub repository. You need to fix the heroku remote. First, remove it:
$ git remote rm heroku
$ git remote -v
origin git@github.com:theo82/expensify-react-app.git (fetch)
origin git@github.com:theo82/expensify-react-app.git (push)
Determine the URL of your Heroku remote, then add it:
$ git remote add heroku https://git.heroku.com/yourappname.git
$ git remote -v
heroku https://git.heroku.com/yourappname.git (fetch)
heroku https://git.heroku.com/yourappname.git (push)
origin git@github.com:theo82/expensify-react-app.git (fetch)
origin git@github.com:theo82/expensify-react-app.git (push)
Then push with git push heroku master.
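The same fix, sketched as runnable commands in a throwaway scratch repository (the app name in the Heroku URL is a placeholder; substitute your own):

```shell
# Demonstrate re-pointing a misconfigured remote in a scratch repo.
# "yourappname" is a placeholder; use your real Heroku app name.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"

# Reproduce the misconfiguration: the heroku remote points at GitHub.
git remote add heroku git@github.com:theo82/expensify-react-app.git

# Remove the wrong remote and re-add it with the correct Heroku URL.
git remote rm heroku
git remote add heroku https://git.heroku.com/yourappname.git

git remote get-url heroku   # → https://git.heroku.com/yourappname.git
```

If the Heroku CLI is installed, `heroku git:remote -a yourappname` adds the same remote in a single step.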

Related

Deploy React Frontend, FastAPI Backend with Docker-Compose on Heroku

I want to deploy a worker (FastAPI) container and a web (React) container to Heroku using docker-compose. Running locally, everything works, but on Heroku the frontend cannot reach the backend.
Dockerfile.worker
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
WORKDIR /app
COPY app .
COPY requirements.txt ./requirements.txt
ENV IS_IN_DOCKER=true
RUN pip3 install -r requirements.txt
RUN pip3 install -r ./custom_model/custom_requirements.txt
CMD uvicorn main:app --host 0.0.0.0 --port 8800
Dockerfile.web
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . ./
# start app
CMD ["npm", "start"]
docker-compose.yml (using the PORT env variable from Heroku for the frontend; locally I have a .env file with PORT=80)
version: '3.7'
services:
  ml-starter-frontend:
    depends_on:
      - ml-starter-backend
    container_name: ml-starter-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.web
    ports:
      - '${PORT}:3000'
    restart: always
  ml-starter-backend:
    container_name: ml-starter-backend
    build:
      context: ./backend
      dockerfile: Dockerfile.worker
    restart: always
setupProxy.js in React src
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function(app) {
  app.use(
    '/api/*',
    createProxyMiddleware({
      target: 'http://ml-starter-backend:8800',
      changeOrigin: true,
    })
  );
};
The call from the React frontend to the backend:
export const getConfiguration = () => async (dispatch) => {
  const { data } = await axios.get('/api/configs')
  dispatch({ type: GET_CONFIGURATION, payload: data })
}
Running locally, the call works (screenshot omitted).
Deployment to Heroku:
heroku container:push --recursive --app {appName}
heroku container:release worker --app {appName}
heroku container:release web --app {appName}
On Heroku, the frontend cannot reach the backend (screenshot omitted).
Heroku Worker (FastAPI) log
2021-04-09T10:10:55.322684+00:00 app[worker.1]: INFO: Application startup complete.
2021-04-09T10:10:55.325926+00:00 app[worker.1]: INFO: Uvicorn running on http://0.0.0.0:8800 (Press CTRL+C to quit)
Heroku Web (React) Log
2021-04-09T10:11:21.572639+00:00 app[web.1]: [HPM] Proxy created: / -> http://ml-starter-backend:8800
.....
2021-04-09T10:25:37.404622+00:00 app[web.1]: [HPM] Error occurred while trying to proxy request /api/configs from {appName}.herokuapp.com to http://ml-starter-backend:8800 (ENOTFOUND) (https://nodejs.org/api/errors.html#errors_common_system_errors)
If possible, I would like to avoid nginx, as I already spent two hours trying to make it work with proxies in the prod build. Does anybody have experience with Heroku and docker-compose who could give me a hint on how to fix this?
For everybody with a similar problem: I actually ended up with a workaround. I made a single Dockerfile containing the backend and frontend, and deployed it to Heroku as a web app.
During my search I discovered that Heroku workers are not really suitable for an HTTP scenario. If I understood it correctly, they are meant for reading messages from a queue: https://devcenter.heroku.com/articles/background-jobs-queueing
Yup, I did something like what Loki34 did too. My approach was to run npm run build, then have FastAPI serve the build files. As Loki34 says, this is a workaround, meaning it's probably not the best approach for deploying frontend and backend in Docker containers. Anyway, those who are curious about how I did it can check out this app: https://github.com/yxlee245/app-iris-ml-react-fastapi
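A single-image sketch of that workaround, assuming the frontend/ and backend/ layout from the question; directory names and the "static" target are assumptions, not taken from either linked repo:

```dockerfile
# Sketch: build the React app, then ship only its build output with FastAPI.
# Paths (frontend/, backend/) and the "static" directory are assumptions.

# Stage 1: compile the React frontend
FROM node:13.12.0-alpine AS frontend
WORKDIR /app
COPY frontend/package*.json ./
RUN npm install
COPY frontend/ ./
RUN npm run build

# Stage 2: FastAPI backend serving the static build
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
WORKDIR /app
COPY backend/requirements.txt ./
RUN pip3 install -r requirements.txt
COPY backend/app ./
COPY --from=frontend /app/build ./static
# Heroku assigns the port at runtime via $PORT
CMD uvicorn main:app --host 0.0.0.0 --port $PORT
```

On the FastAPI side, the copied build can then be served with `app.mount("/", StaticFiles(directory="static", html=True))` from `fastapi.staticfiles`.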

How to deploy a React app to Elastic Beanstalk on the Docker platform using Travis CI?

Environment health has transitioned from Ok to Severe. ELB processes are not healthy on all instances. ELB health is failing or not available for all instances.
I am deploying a react app in AWS using the docker platform. I am getting HEALTH-Severe issues when I deploy my app. I have also added custom TCP inbound rules in the EC2 instance (source-anywhere).
I am using the AWS free tier. The following is my Dockerfile.
FROM node:alpine as builder
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
My .travis.yml file:
language: generic
sudo: required
services:
  - docker
before_install:
  - docker build -t username/docker-react -f Dockerfile.dev .
script:
  - docker run -e CI=true username/docker-react npm run test
deploy:
  provider: elasticbeanstalk
  region: us-east-2
  app: "docker-react"
  env: "DockerReact-env"
  bucket_name: "my bucket-name"
  bucket_path: "docker-react"
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
When I open my app I am getting 502 Bad Gateway error.
I had the same problem. After reading some of the documentation here, I figured maybe docker-compose.yml is actually picked up first, before anything else. Deleting my docker-compose.yml (which I was only using locally) solved the issue for me.
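If the compose file is still wanted for local development, an alternative worth trying (an assumption on my part, not something verified with Travis's elasticbeanstalk provider) is to keep it but exclude it from the deployment bundle via an `.ebignore` file, which the EB tooling reads in place of `.gitignore`:

```
# .ebignore -- keep the local-only compose file out of the deployed bundle
docker-compose.yml
```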

React + Flask Docker-Compose Containers Unable To Communicate Locally, API Calls Returning net::ERR_EMPTY_RESPONSE In Browser

I have 2 docker containers, the front being a React.js app running on ports 3000:3000 and the back being a Flask API running on 5000:5000.
ISSUE:
I am having an issue with these containers wrapped together in docker-compose: the front is accessible via localhost:3000, as it would be when running outside a container, but it is unable to communicate with the back container. I receive net::ERR_EMPTY_RESPONSE in the browser when attempting to use any API component. How might I resolve this?
SETUP:
My directory for this docker-compose setup is as follows:
/project root
  - docker-compose.yml
  /react front
    - Dockerfile
    /app
  /flask back
    - Dockerfile
    /api
My docker-compose.yml is as follows:
version: "3.8"
services:
  flask back:
    build: ./flask back
    command: python main.py run -h 0.0.0.0
    volumes:
      - ./flask back/:/usr/src/app/
    ports:
      - 5000:5000
    env_file:
      - ./flask back/.env
  react front:
    build: ./react front
    volumes:
      - ./react front/app:/usr/src/app
      - usr/src/app/node_modules
    ports:
      - 3000:3000
    links:
      - flask back
The front Dockerfile:
# pull official base image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /usr/src/app
# add `/react front/node_modules/.bin` to $PATH
ENV PATH /react front/node_modules/.bin:$PATH
# install app dependencies
ADD package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts@3.4.1 -g --silent
# start app
CMD ["npm", "start"]
The back Dockerfile:
FROM python:alpine3.7
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
COPY . /usr/src/app/
TROUBLESHOOTING SO FAR:
So far I have consulted the following threads on SO:
Flask and React Docker containers not communicating via Docker-Compose - where the package.json needed a proxy addition.
ERR_EMPTY_RESPONSE from docker container - where IP addresses needed to be rewritten to 0.0.0.0 (this appears to be an issue unique to Go, as I never used this form of port and IP configuration in my project)
Neither of these very similar issues has resolved mine. I am also able to ping the back-end container from the front-end and vice versa. Running the React container while running the Flask API outside of its container also works as expected. If there is any other information anyone would like, I would be happy to provide it.
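For reference, the proxy addition mentioned in the first thread is a top-level `proxy` field in the React app's package.json. The hostname below is a placeholder (it must match the compose service name, which cannot contain spaces), and CRA's proxy only applies to the development server, not a production build:

```json
{
  "proxy": "http://flaskback:5000"
}
```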
Thank you for the time and patience.

Problem deploying Metabase with Google App Engine

I am following these instructions for deploying Metabase with Google App Engine. After I complete the steps and open the URL where the service is deployed, I get 502 Bad Gateway or
Error: Server Error
The server encountered a temporary error and could not complete your request. Please try again in 30 seconds.
and from the console I get
INFO metabase.driver :: Registered abstract driver :sql ?
This is my app.yaml
env: flex
manual_scaling:
  instances: 1
env_variables:
  MB_JETTY_PORT: 8080
  MB_DB_TYPE: postgres
  MB_DB_DBNAME: metabase
  MB_DB_PORT: 5432
  MB_DB_USER: devops
  MB_DB_PASS: password
  MB_DB_HOST: 127.0.0.1
beta_settings:
  cloud_sql_instances: <instance-name>=tcp:5432
Dockerfile:
FROM gcr.io/google-appengine/openjdk
EXPOSE 8080
ENV PORT 8080
ENV MB_PORT 8080
ENV MB_JETTY_PORT 8080
ENV MB_DB_PORT 5432
ENV METABASE_SQL_INSTANCE <instance_name>=tcp:5432
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
ADD https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 ./cloud_sql_proxy
ADD http://downloads.metabase.com/v0.33.2/metabase.jar /metabase.jar
RUN chmod +x ./cloud_sql_proxy
CMD ./cloud_sql_proxy -instances=$METABASE_SQL_INSTANCE=tcp:$MB_DB_PORT & java -jar ./metabase.jar
I also troubleshot everything I saw on Stack Overflow and tried all the options from similar problems, but it is still not working. I tried this option 1 and this option 2, but to no effect.
My steps:
On GCP I am the owner of the project. I created a Compute Engine VM instance, then a Cloud SQL Postgres instance and a new Postgres database with a user. I added the public IP address of the VM to the SQL instance's configuration as an authorized network, and deployed the app.yaml and Dockerfile with gcloud app deploy. Any working solutions?
[1]: https://www.cloudbooklet.com/install-metabase-on-google-cloud-with-docker-app-engine/
I fixed the issue. I just changed the Metabase version; it always has to be the newest, 0.36.6 at this moment.
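In the Dockerfile above, that means pointing the ADD line at the newer release; the URL pattern is taken from the original Dockerfile, with the version number from this answer:

```dockerfile
ADD http://downloads.metabase.com/v0.36.6/metabase.jar /metabase.jar
```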

Permission error on GitLab CI deployment with Google App Engine

I am trying to set up a fully integrated deployment process with GitLab and Google App Engine, but the deployment fails because I seem to be missing rights. I really don't know which right is needed here.
I activated the App Engine API and the Cloud Build API.
The rights my service account has are shown in a screenshot (omitted here).
My gitlab-ci.yml file:
Staging Deployment:
  stage: Deploy
  environment:
    name: Staging
  before_script:
    - echo "deb http://packages.cloud.google.com/apt cloud-sdk-jessie main" | tee /etc/apt/sources.list.d/google-cloud-sdk.list
    - curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    - apt-get update
    - apt-get -qq -y install google-cloud-sdk
  script:
    - echo $DEPLOY_KEY_FILE_PRODUCTION > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud config list
    - gcloud config set project $PROJECT_ID_PRODUCTION
    - gcloud --quiet --project $PROJECT_ID_PRODUCTION app deploy gae/test.yml
  only:
    refs:
      - master
And finally my test.yml:
# [START django_app]
runtime: python37
service: testing
entrypoint: gunicorn -b :$PORT manta.wsgi --timeout 120
default_expiration: "5m"

env_variables:
  DJANGO_SETTINGS_MODULE: manta.settings.dev

handlers:
  # This configures Google App Engine to serve the files in the app's
  # static directory.
  - url: /static
    static_dir: static/
  # This handler routes all requests not caught above to the main app.
  # It is required when static routes are defined, but can be omitted
  # (along with the entire handlers section) when there are no static
  # files defined.
  - url: /.*
    script: auto
# [END django_app]
And the error I have during deployment:
ERROR: (gcloud.app.deploy) Permissions error fetching application [apps/testing]. Please make sure you are using the correct project ID and that you have permission to view applications on the project.
