I am migrating an existing GAE Flex application to GAE Standard, and the app won't start. The gcloud app deploy command succeeds, but the webserver process fails on calls to yarn:
[start] 2021/04/23 16:40:57.592718 No entrypoint specified, using default entrypoint: /serve
[start] 2021/04/23 16:40:57.596207 Starting app
[start] 2021/04/23 16:40:57.596522 Executing: /bin/sh -c exec /serve
[start] 2021/04/23 16:40:57.602799 Waiting for network connection open. Subject:"app/invalid" Address:127.0.0.1:8080
[start] 2021/04/23 16:40:57.603189 Waiting for network connection open. Subject:"app/valid" Address:127.0.0.1:8081
[serve] 2021/04/23 16:40:57.616964 Serve started.
[serve] 2021/04/23 16:40:57.617857 Args: {runtimeLanguage:nodejs runtimeName:nodejs14 memoryMB:256 positional:[]}
[serve] 2021/04/23 16:40:57.620632 Running /bin/sh -c DEBUG=express:*,typeorm:* yarn ts-node:run src/index.ts
sh: 1: yarn: not found
[start] 2021/04/23 16:40:57.628730 Start program failed: failed to detect app after start: ForAppStart(): [aborted, context canceled. subject:"app/valid" Timeout:30m0s, attempts:4 aborted, context canceled. subject:"app/invalid" Timeout:30m0s, attempts:5]
Container called exit(1).
It seems that yarn is picked up and works just fine at build time, but not at runtime. The Cloud Build logs contain a number of lines similar to this one:
Step #7 - "exporter": Reusing layer 'google.nodejs.yarn:env'
The app.yaml is minimal:
runtime: nodejs14
service: /* redacted */
resources:
  cpu: 2
  memory_gb: 2
includes:
  - env_variables.production.yaml
#[START cloudsql_settings]
beta_settings:
  cloud_sql_instances: /* redacted */
#[END cloudsql_settings]
And the package.json looks roughly like this:
{
  "engines": {
    "node": ">=14"
  },
  ...,
  "scripts": {
    "start": "DEBUG=express:*,typeorm:* yarn ts-node:run src/index.ts",
    "ts-node:run": "ts-node -r tsconfig-paths/register -r dotenv/config"
  }
}
What are my possible workarounds here? I'd like to avoid switching to npm, because a number of package scripts already rely on yarn, and it would take time to verify that the change doesn't break them.
Searching the GCP GitHub repositories for a solution, I came across a similar issue in the ruby-docker image. It could well be a temporary bug on the GCP side.
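One workaround I'm considering (a sketch only, not verified against this runtime) is to stop shelling out to yarn inside the start script, since the runtime image apparently doesn't have yarn on its PATH, and inline the ts-node invocation instead:

```json
{
  "scripts": {
    "start": "DEBUG=express:*,typeorm:* ts-node -r tsconfig-paths/register -r dotenv/config src/index.ts",
    "ts-node:run": "ts-node -r tsconfig-paths/register -r dotenv/config"
  }
}
```

This would leave all the other yarn-based scripts untouched; only the start script that the runtime invokes avoids calling yarn.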
I moved from manual deployment to automatic CI/CD with my GitHub repo. With manual deployment it was working without any issues; after connecting the main repo and starting the build, it never gets past the frontend provision.
I get a build timeout error with the default 30-minute setting, and even after increasing it to 120 minutes via the environment-variable override it still takes too long.
On my local machine the build takes under 5 minutes without any errors.
I can see from the build log that it gets stuck after running the commands in gulpfile.js.
Build Settings file:
version: 1
env:
  variables:
    VERSION_AMPLIFY: 8.3.0
backend:
  phases:
    preBuild:
      commands:
        - npm i -g @aws-amplify/cli@${VERSION_AMPLIFY}
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn run build
        - node ./node_modules/gulp/bin/gulp.js
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
My gcloud build will time out if left at the default timeout of 10 minutes, so I have tried to increase the timeout to 20 minutes.
This is my cloudbuild.yaml.
# cloudbuild.yaml
steps:
  - name: node:14.17.1
    entrypoint: npm
    args: ["install"]
  - name: node:14.17.1
    entrypoint: npm
    args: ["run", "build"]
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
timeout: 1200s
It processes Step 0 and Step 1 and fails at Step 2, which is gcloud app deploy.
The execution log reports the following error:
ERROR: gcloud crashed (InvalidBuildError): Field [timeout] was provided, but should not have been. You may be using an improper Cloud Build pipeline.
All the documentation I've seen says that this is how you increase the timeout; some say the timeout needs to be wrapped in single quotes, but that doesn't appear to matter: the execution details correctly identify the timeout as 20 minutes, and trying single quotes makes no difference to the outcome.
I've also tried setting a timeout on the app deploy step itself, but it produces the same error, and it would be ineffectual anyway, since it is the entire build process that exceeds the execution time when left at the default.
The timeout setting can be used if the App Engine environment is Standard, not Flexible.
The environment is set by the env setting in app.yaml; if this value is not provided, the environment defaults to Standard. Simply ensure that env: flex is removed from app.yaml.
It is unclear whether this is by design or a bug.
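If the Flexible environment is actually needed, a commonly cited alternative (which, to my understanding, applies to the build that gcloud app deploy triggers, though I haven't confirmed it for every gcloud version) is to raise the deployment build timeout through a gcloud property instead of cloudbuild.yaml:

```shell
# Set the App Engine deployment build timeout to 20 minutes
gcloud config set app/cloud_build_timeout 1200
```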
I am facing an issue building my React project using GitHub as the repository, Travis as CI, and AWS Elastic Beanstalk as the service that runs my app in Docker. I am able to run my test suite, but after that it does not deploy my app to AWS, and I get no error in the Travis console except the one below:
Below is my Travis .yml file configuration:
language: generic
services:
  - docker
before_install:
  - docker build -t heet1996/my-profile -f Dockerfile.dev .
script:
  - docker run heet1996/my-profile npm run test -- --coverage
deploy:
  provider: elasticbeanstalk
  region: "us-east-1"
  app: "My-profile"
  env: "MyProfile-env"
  bucket_name: "elasticbeanstalk-us-east-1-413920612934"
  bucket_path: "My-profile"
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: "$AWS_SECRET_KEY"
Let me know if you need more information.
A couple of things you could try:
Your script command needs to set the environment variable CI=true.
So
script:
- docker run heet1996/my-profile npm run test -- --coverage
Becomes
script:
- docker run -e CI=true heet1996/my-profile npm run test -- --coverage
Also AWS needs the access variables to be named differently.
Change
access_key_id: $AWS_ACCESS_KEY
secret_access_key: "$AWS_SECRET_KEY"
To
access_key_id: "$AWS_ACCESS_KEY_ID"
secret_access_key: "$AWS_SECRET_ACCESS_KEY"
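Putting both fixes together, the affected parts of .travis.yml would look roughly like this (assuming the renamed variables are also defined in your Travis repository settings):

```yaml
script:
  - docker run -e CI=true heet1996/my-profile npm run test -- --coverage

deploy:
  provider: elasticbeanstalk
  region: "us-east-1"
  app: "My-profile"
  env: "MyProfile-env"
  bucket_name: "elasticbeanstalk-us-east-1-413920612934"
  bucket_path: "My-profile"
  on:
    branch: master
  access_key_id: "$AWS_ACCESS_KEY_ID"
  secret_access_key: "$AWS_SECRET_ACCESS_KEY"
```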
With the --coverage option, your test run hangs waiting for input, hence the message: "...no output has been received in the last 10m0s...".
At some point --coverage was probably able to stop the test run (some used it for that purpose), but I suspect it was never meant for that, and later versions removed the behavior.
Your tests must conclude, and conclude successfully, for the Travis deployment to begin.
Use instead the option --watchAll=false. So you should have:
...
script:
- docker run heet1996/my-profile npm run test -- --watchAll=false
...
That takes care of the obvious issue of your tests never concluding (which may be the only issue). Afterward, make sure your tests actually pass; then you can worry about other issues, such as authentication on AWS.
I've built a Docker image which consists of two parts:
a simple Node.js app listening on port 8080
a Haskell service using the Snap framework (port 8000)
I know that it's better to run these two parts in separate containers, but there is a reason to keep them in one, so I found a way to run two services in one container using supervisord.
In the Dockerfile I expose 8080, and when I run the Docker image locally it works just fine: I can make POST requests to the Node.js app, which in turn makes POST requests to the haskellmodule service on port 8000. I run it with the following command:
docker run -p 8080:8080 image_name
So I pushed the image to Google Container Registry and deployed it using the --image-url flag. The deployment completes without any errors, but after that I cannot reach my app. If I look at the running version's logs, I see the following:
A /usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
A 'Supervisord is running as root and it is searching '
A 2017-10-08 14:08:45,368 CRIT Supervisor running as root (no user in config file)
A 2017-10-08 14:08:45,368 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
A 2017-10-08 14:08:45,423 INFO RPC interface 'supervisor' initialized
A 2017-10-08 14:08:45,423 CRIT Server 'unix_http_server' running without any HTTP authentication checking
A 2017-10-08 14:08:45,424 INFO supervisord started with pid 1
A 2017-10-08 14:08:46,425 INFO spawned: 'haskellmodule' with pid 7
A 2017-10-08 14:08:46,427 INFO spawned: 'nodesrv' with pid 8
A 2017-10-08 14:08:47,429 INFO success: haskellmodule entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
A 2017-10-08 14:08:47,429 INFO success: nodesrv entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
A 2017-10-08 14:13:49,124 WARN received SIGTERM indicating exit request
A 2017-10-08 14:13:49,127 INFO waiting for haskellmodule, nodesrv to die
A 2017-10-08 14:13:49,128 INFO stopped: nodesrv (terminated by SIGTERM)
A 2017-10-08 14:13:49,138 INFO stopped: haskellmodule (terminated by SIGTERM)
Then it starts over and everything is repeated over and over again.
My Dockerfile:
FROM node:latest
RUN apt-get update
RUN curl -sSL https://get.haskellstack.org/ | sh
COPY ./nodesrv /nodesrv
COPY ./haskellmodule /haskellmodule
RUN mkdir /log
WORKDIR /haskellmodule
RUN stack build
WORKDIR /
RUN apt-get update && apt-get install -y supervisor
ADD ./configs/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 8080
ENTRYPOINT ["/usr/bin/supervisord"]
My supervisord config:
[supervisord]
nodaemon=true
[program:nodesrv]
command=node index.js
directory=/nodesrv/
user=root
[program:haskellmodule]
command=stack exec haskellmodule-exe
directory=/haskellmodule/
user=root
My app.yaml file I use for deployment:
runtime: custom
env: flex
So it seems that Google App Engine is shutting supervisord down (even though everything works on localhost). What could be the reason for that?
Thanks in advance.
You need to configure your app.yaml file to open ports 8080 and 8000, in addition to exposing the port in your Dockerfile with EXPOSE. The documentation for setting up app.yaml is located here; the example from the docs is copied below:
Add the following to your app.yaml:
network:
  instance_tag: TAG_NAME
  name: NETWORK_NAME
  subnetwork_name: SUBNETWORK_NAME
  forwarded_ports:
    - PORT
    - HOST_PORT:CONTAINER_PORT
    - PORT/tcp
    - HOST_PORT:CONTAINER_PORT/udp
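For this particular setup, assuming 8080 stays the default serving port and only the Haskell service's port needs forwarding, the section might reduce to something like:

```yaml
network:
  forwarded_ports:
    - 8000/tcp
```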
Run supervisord with the -n argument.
This runs supervisord in the foreground.
Works fine for me in the App Engine flexible environment.
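In the Dockerfile above, that would mean changing the ENTRYPOINT along these lines (the -c path is an assumption, matching the config location used earlier; adjust if yours differs):

```dockerfile
# Run supervisord in the foreground with an explicit config path
ENTRYPOINT ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```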
Thanks
My app.yaml
runtime: custom
vm: true
api_version: 1
health_check:
  enable_health_check: False
Dockerfile
# Use the official go docker image built on debian.
FROM golang:1.5.1
# Grab the source code and add it to the workspace.
ADD . /go/
# Install revel and the revel CLI.
RUN go get github.com/revel/revel
RUN go get github.com/revel/cmd/revel
# Use the revel CLI to start up our application.
ENTRYPOINT revel run 4quorum-appengine dev 8080
# Open up the port where the app is running.
EXPOSE 8080
I was working through this article:
http://jbeckwith.com/2015/05/08/docker-revel-appengine/
Preview
I am trying to preview it:
gcloud preview app run app.yaml --custom-entrypoint "revel run 4quorum-appengine dev 8080"
WARNING: The `app run` command is deprecated and will soon be removed.
Please use dev_appserver.py (in the same directory as the `gcloud` command) instead.
Module [default] found in file [/Users/802619/Projects/src/4quorum_root/app.yaml]
INFO: Looking for the Dockerfile in /Users/802619/Projects/src/4quorum_root
INFO: Using Dockerfile found in /Users/802619/Projects/src/4quorum_root
INFO 2015-11-06 18:03:44,226 application_configuration.py:399] No version specified. Generated version id: 20151106t180344
INFO 2015-11-06 18:03:44,226 devappserver2.py:763] Skipping SDK update check.
INFO 2015-11-06 18:03:44,266 api_server.py:205] Starting API server at: http://localhost:62780
INFO 2015-11-06 18:03:44,272 dispatcher.py:197] Starting module "default" running at: http://localhost:8080
INFO 2015-11-06 18:03:44,277 admin_server.py:116] Starting admin server at: http://localhost:8000
ERROR 2015-11-06 18:03:44,282 instance.py:280] [Errno 2] No such file or directory
The same thing happens when trying dev_appserver.py.
Deploy
Deploy also doesn't work; it fails because of a timeout.
gcloud preview app deploy ./app.yaml
WARNING: Soon, deployments will set the deployed version to receive all traffic by
default.
To keep the current behavior (where new deployments do not receive any traffic),
use the `--no-promote` flag or run the following command:
$ gcloud config set app/promote_by_default false
To adopt the new behavior early, use the `--promote` flag or run the following
command:
$ gcloud config set app/promote_by_default true
Either passing one of the new flags or setting one of these properties will
silence this message.
You are about to deploy the following modules:
- vaulted-gift-112113/default (from [/Users/802619/Projects/src/4quorum_root/app.yaml])
Deployed URL: [https://20151106t204027-dot-vaulted-gift-112113.appspot.com]
(add --promote if you also want to make this module available from
[https://vaulted-gift-112113.appspot.com])
Beginning deployment...
Verifying that Managed VMs are enabled and ready.
Provisioning remote build service.
Copying certificates for secure access. You may be prompted to create an SSH keypair.
Building and pushing image for module [default]
Saving [.dockerignore] to [/Users/802619/Projects/src/4quorum_root].
----------------------------- DOCKER BUILD OUTPUT ------------------------------
Step 0 : FROM golang:1.5.1
---> f6271e8f3723
Step 1 : ADD . /go/
---> 94fafc5e8a30
Removing intermediate container cfbe197f6e93
Step 2 : RUN go get github.com/revel/revel
---> Running in d7ad8c923144
---> b65877cf3049
Removing intermediate container d7ad8c923144
Step 3 : RUN go get github.com/revel/cmd/revel
---> Running in 2a9b3320ce47
---> 428defd008f3
Removing intermediate container 2a9b3320ce47
Step 4 : ENTRYPOINT revel run 4quorum-appengine dev 8080
---> Running in 8b9e38ec69ec
---> 3749ee8a6636
Removing intermediate container 8b9e38ec69ec
Step 5 : EXPOSE 8080
---> Running in a0e6c66b56c8
---> dafff62b9643
Removing intermediate container a0e6c66b56c8
Successfully built dafff62b9643
--------------------------------------------------------------------------------
Copying files to Google Cloud Storage...
Synchronizing files to [gs://staging.vaulted-gift-112113.appspot.com/].
Updating module [default]...|Deleted [https://www.googleapis.com/compute/v1/projects/vaulted-gift-112113/zones/us-central1-f/instances/gae-builder-vm-20151106t204027].
Updating module [default]...failed.
ERROR: (gcloud.preview.app.deploy) Error Response: [4] Timed out creating VMs.
About to drop this.
Moved to Heroku. Google App Engine is not ready yet.