I have a project that uses a FastAPI backend and a Vite React front end. I currently have each dockerized as a separate image.
Is there a way I can get FastAPI to serve the Vite project's static files so I don't have to have two images?
I have seen this https://fastapi.tiangolo.com/tutorial/static-files/ but I'm not sure if that's what I need, or whether it would be something like:
from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

load_dotenv()  # take environment variables from .env
app = FastAPI()

# serves static files from the /frontend/dist directory
app.mount("/frontend", StaticFiles(directory="frontend/dist"), name="frontend")
I ran the above and got the error RuntimeError: Directory 'frontend/dist' does not exist.
My Vite project lives in frontend, and I ran the vite build command, which produced a dist folder with my static files inside. This is my current structure:
.
├── README.md
├── backend
│   ├── Dockerfile
│   ├── Pipfile
│   ├── Pipfile.lock
│   └── main.py
└── frontend
    ├── Dockerfile
    ├── dist
    │   ├── assets
    │   │   ├── favicon.17e50649.svg
    │   │   ├── index.1cd49a68.js
    │   │   └── index.30ea237b.css
    │   └── index.html
    ├── index.html
    ├── package-lock.json
    ├── package.json
    ├── postcss.config.js
    ├── src
    │   ├── App.jsx
    │   ├── favicon.svg
    │   ├── index.css
    │   ├── logo.svg
    │   └── main.jsx
    ├── tailwind.config.js
    └── vite.config.js
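Note: StaticFiles resolves a relative directory against the process's working directory, not against main.py. Since main.py lives in backend/, starting the server from inside that directory presumably means the path has to climb out one level, roughly along these lines (a sketch based on the tree above, not the exact suggested change):

from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()

# Relative to the working directory (backend/), not to this file.
app.mount("/frontend", StaticFiles(directory="../frontend/dist"), name="frontend")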
After MadsLindh suggested the change, I was able to start the FastAPI server from within the backend directory, but now I get
> uvicorn main:app --reload
INFO: Will watch for changes in these directories: ['/Users/paul/Desktop/deal_query_ui/backend']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [79530] using WatchFiles
INFO: Started server process [79532]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:63011 - "GET / HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:63012 - "GET /apple-touch-icon-precomposed.png HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:63012 - "GET /apple-touch-icon.png HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:63013 - "GET /favicon.ico HTTP/1.1" 404 Not Found
I even tried going to http://127.0.0.1:8000/frontend/, but I get the same {"detail":"Not Found"}.
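For what it's worth, these 404s are expected with the mount above: a StaticFiles mount only serves index.html at the mount root when html=True is passed, and a mount at /frontend only answers URLs under that prefix, so / has no route at all. A minimal sketch of serving the built SPA at the root instead, assuming the server is still started from inside backend/ (the example route is an assumption):

from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()

# API routes are declared first so they take precedence over the catch-all mount.
@app.get("/api/health")
def health():
    return {"ok": True}

# html=True makes the mount serve index.html for "/" instead of returning 404.
app.mount("/", StaticFiles(directory="../frontend/dist", html=True), name="frontend")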
Docker
I then went on to create a Dockerfile in the project root:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim
EXPOSE 80
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY ./backend/Pipfile ./backend/Pipfile.lock ./
RUN python -m pip install --upgrade pip
RUN pip install pipenv && pipenv install --dev --system --deploy
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["uvicorn", "backend.main:app", "--host", "0.0.0.0", "--port", "80"]
When running this locally I get the error
Traceback (most recent call last):
  File "/usr/local/bin/uvicorn", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/uvicorn/main.py", line 407, in main
    run(
  File "/usr/local/lib/python3.8/site-packages/uvicorn/main.py", line 575, in run
    server.run()
  File "/usr/local/lib/python3.8/site-packages/uvicorn/server.py", line 60, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "uvloop/loop.pyx", line 1501, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.8/site-packages/uvicorn/server.py", line 67, in serve
    config.load()
  File "/usr/local/lib/python3.8/site-packages/uvicorn/config.py", line 479, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.8/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/app/./backend/main.py", line 14, in <module>
    app.mount("/frontend", StaticFiles(directory="../frontend/dist"), name="frontend")
  File "/usr/local/lib/python3.8/site-packages/starlette/staticfiles.py", line 55, in __init__
    raise RuntimeError(f"Directory '{directory}' does not exist")
RuntimeError: Directory '../frontend/dist' does not exist
This doesn't make sense to me, since I could get the server up and running from within the backend directory.
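A note that may explain the discrepancy: the relative path is resolved against the working directory of the process, which is /app inside the container, so "../frontend/dist" points at /frontend/dist, which does not exist. A sketch of anchoring the path to the module file instead, so it works both locally and in the container (the layout is assumed from the tree above):

from pathlib import Path

from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

# backend/main.py -> the project root is one level above this file
BASE_DIR = Path(__file__).resolve().parent.parent

app = FastAPI()
app.mount(
    "/frontend",
    # html=True also serves index.html at the mount root
    StaticFiles(directory=BASE_DIR / "frontend" / "dist", html=True),
    name="frontend",
)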
The "Versions" page in the AppEngine section of the GCP console here displays a table containing all of the git commit SHA-1 hashes that have been deployed for a given AppEngine Service.
How would I display this list using the gcloud CLI?
You can generate the table you're looking for using the app group within the gcloud CLI.
Here is an example with some formatting and ascending sorting:
gcloud app versions list \
  --format="table[box](last_deployed_time.datetime:label=DEPLOYED, version.id:label=GIT_COMMIT_HASH)" \
  --service=$GAE_SERVICE_NAME \
  --sort-by=DEPLOYED
#=>
┌───────────────────────────┬──────────────────────────────────────────┐
│ DEPLOYED │ GIT_COMMIT_HASH │
├───────────────────────────┼──────────────────────────────────────────┤
│ 1970-01-01 00:00:00-00:00 │ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx │
│ . . . │ . . . │
│ . . . │ . . . │
│ . . . │ . . . │
│ 1970-01-01 00:00:01-00:00 │ yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy │
└───────────────────────────┴──────────────────────────────────────────┘
I'm having trouble setting up my YAML files for Google App Engine. The configuration works correctly when my app.yaml file is in the root of the project, but if it is within a subdirectory it does not build the correct source. I suspect I need to set the dir: option in the build config, but I have tried multiple variations and I can't get it to work.
Working file structure; the deployed app is ~3 MB in size.
.
├── src
├── deployment
│   └── staging
│       └── build.yaml
└── app.staging.yaml
# build.yaml
steps:
  - name: node:12
    entrypoint: yarn
  - name: node:12
    entrypoint: yarn
    args: ['build']
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy", "app.staging.yaml"]
timeout: "1800s"
Not working file structure; the deployed app is ~1 KB in size.
.
├── src
└── deployment
    └── staging
        ├── build.yaml
        └── app.yaml
# build.yaml
steps:
  - name: node:12
    entrypoint: yarn
  - name: node:12
    entrypoint: yarn
    args: ['build']
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy", "deployment/staging/app.yaml"]
timeout: "1800s"
In both scenarios I am kicking off the deployment with:
gcloud builds submit --config deployment/staging/build.yaml
What should my dir: be set to in the build.yaml steps so that the build step knows to build from root? Is there any way to debug this locally without having to upload the source every time?
Thanks!
You cannot have the app.yaml and the cloudbuild.yaml in the same directory if you are deploying to a non-custom runtime. Please see this comment.
I believe by default Cloud Build expects the app.yaml file to be in the uppermost directory. I'm not sure if it's possible to change this in the Cloud Build settings.
I'm encountering an issue when trying to run a simple BigQuery ETL pipeline from a Flask app on Google App Engine in the flex environment.
It works when I run it locally, which I do by first starting it with flask run or gunicorn -b :$PORT main:app and then going to an endpoint in my browser, doing stuff on the page, and submitting a form. The POST handler for the page then invokes the Apache Beam pipeline. All of that works fine.
But when I deploy it with gcloud app deploy and try to access any endpoint I get a 502 error and the logs show the following:
2018-10-04 14:03:39 default[20181003t232620]
Traceback (most recent call last):
  File "/env/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/env/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 129, in init_process
    self.load_wsgi()
  File "/env/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/env/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/env/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
    return self.load_wsgiapp()
  File "/env/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/env/local/lib/python2.7/site-packages/gunicorn/util.py", line 350, in import_app
    __import__(module)
  File "/home/vmagent/app/main.py", line 15, in <module>
    import rw_bigquery_etl
  File "/home/vmagent/app/rw_bigquery_etl.py", line 9, in <module>
    import apache_beam as beam
  File "lib/apache_beam/__init__.py", line 88, in <module>
    from apache_beam import coders
  File "lib/apache_beam/coders/__init__.py", line 19, in <module>
    from apache_beam.coders.coders import *
  File "lib/apache_beam/coders/coders.py", line 30, in <module>
    from apache_beam.coders import coder_impl
ImportError: lib/apache_beam/coders/coder_impl.so: invalid ELF header
2018-10-04 14:03:39 default[20181003t232620] [2018-10-04 14:03:39 +0000] [8] [INFO] Worker exiting (pid: 8)
2018-10-04 14:03:39 default[20181003t232620] [2018-10-04 14:03:39 +0000] [1] [INFO] Shutting down: Master
2018-10-04 14:03:39 default[20181003t232620] [2018-10-04 14:03:39 +0000] [1] [INFO] Reason: Worker failed to boot.
The actual error is the last line: ImportError: lib/apache_beam/coders/coder_impl.so: invalid ELF header
I had lots of issues with dependencies recently, so I just ran pip freeze > requirements.txt in the project folder, giving me this (pastebin). I've installed these packages into a lib folder in the project folder and have the line vendor.add('lib') in appengine_config.py. Also, this is my app.yaml:
runtime: python
api_version: 1
threadsafe: true
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
  python_version: 2
handlers:
- url: /.*
  script: main.app
  login: required
How can I resolve this issue, or go about troubleshooting it?
I'm new to Google Cloud and pip, so I'm still trying to understand how the cloud environment works, especially with Python packages.
Consolidating Python dependencies/requirements for Apache Beam is uniquely frustrating.
It would be helpful to see your:
- pipeline config
- how you launch your pipeline locally
- how you launch your pipeline remotely (your request handler code that launches it)
- where your pipeline code sits relative to your project root
But it sounds like the requirements.txt you set up is the requirements.txt for your GAE flex instance and is not getting used for your Dataflow worker. Possibly you supplied your requirements.txt as a command-line option when running locally, and your server code is not supplying that same option.
Look at my answer here:
https://stackoverflow.com/a/51312281/4458510
I've had the best luck using a setup.py for my pipeline's dependencies, like they do in this example: https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/complete/juliaset
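To make that concrete, here is a sketch of launching a pipeline with its dependencies declared via a setup.py next to the pipeline module, in the style of the juliaset example; the project, bucket, and file names are placeholders, not your actual config:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                # placeholder
    temp_location="gs://my-bucket/tmp",  # placeholder
)
# Ship the pipeline's dependencies to the workers via setup.py,
# instead of relying on the server's vendored lib/ directory.
options.view_as(SetupOptions).setup_file = "./setup.py"

with beam.Pipeline(options=options) as p:
    _ = p | beam.Create(["a", "b"]) | beam.Map(lambda x: x.upper())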
When attempting to pull in the grpcio library along with the Cloud Endpoints Framework, an error occurs when running through dev_appserver.py. When these changes are pushed to Google Cloud Platform App Engine, the error does not present itself.
I have tried changing the versions of google-endpoints, grpcio, and six, but none of the combinations resolved the error. I have run into the error on both Windows and Ubuntu.
Error
Traceback (most recent call last):
  File "C:\Users\jwesley\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 240, in Handle
    handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
  File "C:\Users\jwesley\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 299, in _LoadHandler
    handler, path, err = LoadObject(self._handler)
  File "C:\Users\jwesley\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 85, in LoadObject
    obj = __import__(path[0])
  File "C:\Users\jwesley\code_store\gcp_python_sample\python-docs-samples\appengine\standard\endpoints-frameworks-v2\echo\main.py", line 19, in <module>
    import endpoints
  File "C:\Users\jwesley\code_store\gcp_python_sample\python-docs-samples\appengine\standard\endpoints-frameworks-v2\echo\lib\endpoints\__init__.py", line 27, in <module>
    from .apiserving import *
  File "C:\Users\jwesley\code_store\gcp_python_sample\python-docs-samples\appengine\standard\endpoints-frameworks-v2\echo\lib\endpoints\apiserving.py", line 76, in <module>
    from endpoints_management.control import client as control_client
  File "C:\Users\jwesley\code_store\gcp_python_sample\python-docs-samples\appengine\standard\endpoints-frameworks-v2\echo\lib\endpoints_management\__init__.py", line 17, in <module>
    from . import auth, config, control, gen
  File "C:\Users\jwesley\code_store\gcp_python_sample\python-docs-samples\appengine\standard\endpoints-frameworks-v2\echo\lib\endpoints_management\control\__init__.py", line 19, in <module>
    from ..gen import servicecontrol_v1_messages as sc_messages
  File "C:\Users\jwesley\code_store\gcp_python_sample\python-docs-samples\appengine\standard\endpoints-frameworks-v2\echo\lib\endpoints_management\gen\servicecontrol_v1_messages.py", line 23, in <module>
    from apitools.base.py import encoding
  File "C:\Users\jwesley\code_store\gcp_python_sample\python-docs-samples\appengine\standard\endpoints-frameworks-v2\echo\lib\apitools\base\py\__init__.py", line 21, in <module>
    from apitools.base.py.base_api import *
  File "C:\Users\jwesley\code_store\gcp_python_sample\python-docs-samples\appengine\standard\endpoints-frameworks-v2\echo\lib\apitools\base\py\base_api.py", line 27, in <module>
    from six.moves import http_client
ImportError: No module named moves
Recreating the issue
1. Make sure you have the Google Cloud SDK installed.
2. Clone the python-docs-samples repository:
   https://github.com/GoogleCloudPlatform/python-docs-samples.git
3. Navigate into the echo sample:
   cd python-docs-samples/appengine/standard/endpoints-frameworks-v2/echo
4. Start the dev_appserver:
   dev_appserver.py app.yaml
5. Send a POST test to make sure it works:
   curl -d '{"content":"time"}' -H "Content-Type: application/json" -X POST http://localhost:8080/_ah/api/echo/v1/echo
6. With the echo application working, now install grpcio with the version GCP uses:
   pip install grpcio==1.0.0
Edit the app.yaml file to include the grpcio library, so your file will look like this:
runtime: python27
threadsafe: true
api_version: 1

basic_scaling:
  max_instances: 2

#[START_EXCLUDE]
skip_files:
- ^(.*/)?#.*#$
- ^(.*/)?.*~$
- ^(.*/)?.*\.py[co]$
- ^(.*/)?.*/RCS/.*$
- ^(.*/)?\..*$
- ^(.*/)?setuptools/script \(dev\).tmpl$
#[END_EXCLUDE]

handlers:
# The endpoints handler must be mapped to /_ah/api.
- url: /_ah/api/.*
  script: main.api

libraries:
- name: pycrypto
  version: 2.6
- name: ssl
  version: 2.7.11
- name: grpcio
  version: "latest"

# [START env_vars]
env_variables:
  # The following values are to be replaced by information from the output of
  # the 'gcloud endpoints services deploy swagger.json' command.
  ENDPOINTS_SERVICE_NAME: YOUR-PROJECT-ID.appspot.com
  ENDPOINTS_SERVICE_VERSION: 2016-08-01r0
# [END env_vars]
Start dev_appserver.py again.
dev_appserver.py app.yaml
Send the POST again.
curl -d '{"content":"time"}' -H "Content-Type: application/json" -X POST http://localhost:8080/_ah/api/echo/v1/echo
The error I placed above should show up in the output of dev_appserver.py.
I learned how to use a Container Registry trigger for Google Cloud Functions deployments from the following tutorial.
Automatic serverless deployments with Cloud Source Repositories and Container Builder
I have a Google App Engine flexible app. The runtime is Node.js. I want to deploy the app triggered by git push. Are there any good references?
I'm using this example code. Manual deployment works normally.
* tree
.
├── app.js
├── app.yaml
└── package.json
* app.js
'use strict';

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.status(200).send('Hello, world!').end();
});

const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`App listening on port ${PORT}`);
  console.log('Press Ctrl+C to quit.');
});
* app.yaml
runtime: nodejs
env: flex
* package.json
{
  "name": "appengine-hello-world",
  "description": "Simple Hello World Node.js sample for Google App Engine Flexible Environment.",
  "version": "0.0.1",
  "private": true,
  "license": "Apache-2.0",
  "author": "Google Inc.",
  "repository": {
    "type": "git",
    "url": "https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git"
  },
  "engines": {
    "node": ">=4.3.2"
  },
  "scripts": {
    "deploy": "gcloud app deploy",
    "start": "node app.js",
    "lint": "samples lint",
    "pretest": "npm run lint",
    "system-test": "samples test app",
    "test": "npm run system-test",
    "e2e-test": "samples test deploy"
  },
  "dependencies": {
    "express": "4.15.4"
  },
  "devDependencies": {
    "@google-cloud/nodejs-repo-tools": "1.4.17"
  },
  "cloud-repo-tools": {
    "test": {
      "app": {
        "msg": "Hello, world!"
      }
    },
    "requiresKeyFile": true,
    "requiresProjectId": true
  }
}
* deploy command
$ gcloud app deploy
Update 1
I found a similar question.
How to auto deploy google app engine flexible using Container Registry with Build Trigger
I added cloudbuild.yaml.
steps:
# Build the Docker image.
- name: gcr.io/cloud-builders/docker
args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.']
# Push it to GCR.
- name: gcr.io/cloud-builders/docker
args: ['push', 'gcr.io/$PROJECT_ID/app']
# Deploy your Flex app from the image in GCR.
- name: gcr.io/cloud-builders/gcloud
args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/$PROJECT_ID/app']
# Note that this build pushes this image.
images: ['gcr.io/$PROJECT_ID/app']
However, I got an error. The error message is "error loading template: yaml: line 5: did not find expected key". I'm looking into it.
Update 2
The reason was invalid YAML formatting. I changed it to the following.
steps:
  # Build the Docker image.
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.']
  # Push it to GCR.
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'gcr.io/$PROJECT_ID/app']
  # Deploy your Flex app from the image in GCR.
  - name: gcr.io/cloud-builders/gcloud
    args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/$PROJECT_ID/app']
    # Note that this build pushes this image.
    images: ['gcr.io/$PROJECT_ID/app']
I got another error. The message is "error loading template: unknown field "images" in cloudbuild_go_proto.BuildStep"
Update 3
I noticed that the "images" indent was wrong.
steps:
  ...
# Note that this build pushes this image.
images: ['gcr.io/$PROJECT_ID/app']
I encountered a new error.
starting build "e3e00749-9c70-4ac7-a322-d096625b695a"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/xxxx/r/bitbucket-zono-api-btc
* branch 0da6c8bf209c72b6406f3801f3eb66d346187f4e -> FETCH_HEAD
HEAD is now at 0da6c8b fix invalid yaml
BUILD
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
Yes, I don't have a Dockerfile, because I use the Google App Engine flexible environment Node.js runtime. Docker shouldn't be necessary.
Update 4
I added a Dockerfile:
FROM gcr.io/google-appengine/nodejs
Then a new error occurred.
Step #2: ERROR: (gcloud.app.deploy) User [xxxxxxx@cloudbuild.gserviceaccount.com] does not have permission to access app [xxxx] (or it may not exist): App Engine Admin API has not been used in project xxx before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/appengine.googleapis.com/overview?project=xxx then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Update 5
I enabled the App Engine Admin API, and then the next error came.
Step #2: Do you want to continue (Y/n)?
Step #2: WARNING: Unable to verify that the Appengine Flexible API is enabled for project [xxx]. You may not have permission to list enabled services on this project. If it is not enabled, this may cause problems in running your deployment. Please ask the project owner to ensure that the Appengine Flexible API has been enabled and that this account has permission to list enabled APIs.
Step #2: Beginning deployment of service [default]...
Step #2: WARNING: Deployment of service [default] will ignore the skip_files field in the configuration file, because the image has already been built.
Step #2: Updating service [default] (this may take several minutes)...
Step #2: ...............................................................................................................................failed.
Step #2: ERROR: (gcloud.app.deploy) Error Response: [9]
Step #2: Application startup error:
Step #2: npm ERR! path /app/package.json
Step #2: npm ERR! code ENOENT
Step #2: npm ERR! errno -2
Step #2: npm ERR! syscall open
Step #2: npm ERR! enoent ENOENT: no such file or directory, open '/app/package.json'
Step #2: npm ERR! enoent This is related to npm not being able to find a file.
I changed my code tree, but it did not work. I confirmed that the App Engine flexible API has been enabled. I have no idea what I should try next.
.
├── Dockerfile
├── app
│   ├── app.js
│   └── package.json
├── app.yaml
└── cloudbuild.yaml
Update 6
When I deploy manually, the artifact is like the following.
us.gcr.io/xxxxx/appengine/default.20180316t000144
Should I use this artifact? I'm confused.
Update 7
Two builds are executed. I don't know whether this is correct.
Your Dockerfile doesn't copy the source into the image.
You can move everything back into the same directory, such that
.
├── app.js
├── app.yaml
├── cloudbuild.yaml
├── Dockerfile
└── package.json
but it doesn't matter.
Paste this into your Dockerfile and it should work:
FROM gcr.io/google-appengine/nodejs
# Working directory is where files are stored, npm is installed, and the application is launched
WORKDIR /app
# Copy application to the /app directory.
# Add only the package.json before running 'npm install' so 'npm install' is not run if there are only code changes, no package changes
COPY package.json /app/package.json
RUN npm install
COPY . /app
# Expose port so when container is launched you can curl/see it.
EXPOSE 8080
# The command to execute when Docker image launches.
CMD ["npm", "start"]
Edit: This is the cloudbuild.yaml I used:
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.']
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'gcr.io/$PROJECT_ID/app']
  - name: gcr.io/cloud-builders/gcloud
    args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/$PROJECT_ID/app']
images: ['gcr.io/$PROJECT_ID/app']
A tech guy helped me. I changed the directory structure and cloudbuild.yaml, and then it worked. Thanks.
* Code Tree
.
├── app
│   ├── app.js
│   ├── app.yaml
│   └── package.json
└── cloudbuild.yaml
* cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/npm
    args: ['install', 'app']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', 'app/app.yaml']