Deploying to Google App Engine after building on Travis - google-app-engine

I want to host my Spring Boot application on Google App Engine after building it on Travis.
I have configured my .travis.yml as shown below, but I always get this error from GCP:
Step #0: Exception in thread "main" com.google.cloud.runtimes.builder.exception.ArtifactNotFoundException: No deployable artifacts were found. Unable to proceed.
I don't know how to fix this problem in my code.
sudo: false
language: java
jdk:
- oraclejdk8
cache:
  directories:
  - "$HOME/.m2/repository"
before_deploy:
- mv target/healthcare-1.0.jar target/healthcare.jar
deploy:
  provider: gae
  skip_cleanup: true
  keyfile: secret.json
  project: healthcare-196408
  file: target/healthcare.jar
before_install:
- chmod +x mvnw
- openssl aes-256-cbc -K $encrypted_07ec618bb998_key -iv $encrypted_07ec618bb998_iv
  -in secret.json.enc -out secret.json -d
My app.yaml file:
runtime: java
env: flex
handlers:
- url: /.*
  script: this field is required, but ignored
It seems like Travis isn't uploading my .jar at all; how can I fix this?
I have also tried building it as a Docker image, with the same result:
Invalid or corrupt jarfile /app.jar (error from the Docker build)
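One workaround (my sketch, not from the original thread) is to skip the gae provider and deploy with gcloud from a script deploy step, staging the jar next to app.yaml so the flexible-environment Java builder finds exactly one artifact. This assumes the Cloud SDK is installed in the build image and reuses the file names above; deploy-dir is a hypothetical staging directory:
deploy:
  provider: script
  skip_cleanup: true
  script: >-
    mkdir -p deploy-dir &&
    cp app.yaml deploy-dir/ &&
    cp target/healthcare.jar deploy-dir/ &&
    gcloud auth activate-service-account --key-file=secret.json &&
    gcloud --quiet --project healthcare-196408 app deploy deploy-dir/app.yaml
The ArtifactNotFoundException appears to come from the runtime builder finding no .jar or .war in the uploaded directory, so the key point is that the jar and app.yaml travel together.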

Related

Deploying to App Engine flex custom runtime with Cloud Build causes a large number of builds to run

I am kind of new to Cloud Build, so I am confused about what is happening.
First, this is my file structure:
cloudbuild.yaml
backend/
  Dockerfile
  app.yaml
I had an application which I dockerized and deployed to App Engine flex in a custom runtime.
Here's my Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:5.0-buster-slim AS build
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
ENV ASPNETCORE_URLS=http://+:80;
WORKDIR /app
COPY --from=build /app/out .
EXPOSE 80
ENTRYPOINT ["dotnet", "myapp.dll"]
And this is my App Engine flex file:
runtime: custom
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
service: backend
network:
  name: my-network
  subnetwork_name: my-network-subnet
  instance_tag: "backend"
  forwarded_ports:
I have successfully deployed this app on App Engine flex using this command:
gcloud app deploy --appyaml=app.yaml
Then I added a cloudbuild.yaml file, following this Google doc:
steps:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'bash'
  args: ['-c', 'gcloud config set app/cloud_build_timeout 2000 && gcloud app deploy --appyaml=backend/app.yaml']
As you can see in the cloudbuild.yaml, I didn't add the timeout attribute, because it gave me this error each time I tried to submit the build:
Error Response: [13] Error parsing cloudbuild.yaml for runtime custom: Argument is not an object: "2000s"
After removing the timeout attribute, Cloud Build started behaving in a weird way: it kept creating build jobs on its own until it reached over 20 builds.
I had to stop these builds manually because they exceeded the 120-minute free quota limit.
Can someone tell me whether my cloudbuild.yaml is causing the issue, or whether it's a problem on Google Cloud's side?
The problem turned out to be writing the build config as a YAML file; I rewrote it as a JSON file instead. I am not entirely sure why the cloudbuild.yaml file was giving me errors, but this was my solution:
{
  "steps": [
    {
      "name": "gcr.io/google.com/cloudsdktool/cloud-sdk",
      "entrypoint": "bash",
      "args": [
        "-c",
        "gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --appyaml=app.yaml"
      ]
    }
  ],
  "timeout": "1600s"
}
Also, the cloudbuild config and app.yaml must be in the root of the branch, together with the Dockerfile.
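For reference (not part of the original answer), a config saved as cloudbuild.json can also be submitted to Cloud Build by hand, which is a quick way to verify it parses before wiring it into a trigger:
gcloud builds submit --config=cloudbuild.json .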

gcloud app deploy fails: "Cloud build did not succeed within 10m"

I can't deploy a simple Flask/MySQL application on the Python standard environment; it times out.
ERROR: (gcloud.app.deploy) Error Response: [4] Cloud build did not succeed within 10m.
In the logs it says this:
Step #7 - "exporter": Layer 'google.python.appengine:config' SHA: sha256:c7053ac3e...
TIMEOUT
ERROR: context deadline exceeded
This is app.yaml (appropriately censored):
runtime: python37
instance_class: F2
handlers:
- url: /static
  static_dir: static
- url: /.*
  script: auto
env_variables:
  GAE_USE_SOCKETS_HTTPLIB: True
  DB_USER: XXXX
  DB_PASS: XXXX
  DB_NAME: XXXXXXXX
  CLOUD_SQL_CONNECTION_NAME: XXXXXXXXXXX:us-west3:XXXXXXXXXXX
This is requirements.txt:
numpy==1.19.0
Flask==1.1.2
googleads>=24.0.0
SQLAlchemy==1.3.17
PyMySQL==0.9.3
google-search-results==1.8.3
I've tried all kinds of things but nothing works. This used to run with no problems; we haven't changed anything. It's basically a Hello World Flask app.
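One commonly suggested workaround for this error, offered here as an assumption rather than a fix confirmed for this particular app, is to raise the ten-minute Cloud Build timeout that gcloud app deploy applies:
gcloud config set app/cloud_build_timeout 1200
gcloud app deploy
If the build still times out, the build logs linked from the failed deploy usually show which step (often dependency installation) is slow.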

How to deploy a React app to Elastic Beanstalk on the Docker platform using Travis CI?

Environment health has transitioned from Ok to Severe. ELB processes are not healthy on all instances. ELB health is failing or not available for all instances.
I am deploying a React app on AWS using the Docker platform. I get HEALTH-Severe issues when I deploy the app. I have also added custom TCP inbound rules to the EC2 instance (source: anywhere).
I am using the AWS free tier. The following is my Dockerfile:
FROM node:alpine as builder
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
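For a local sanity check (commands assumed, not from the question), the production image can be built and run before handing it to Elastic Beanstalk:
docker build -t username/docker-react .
docker run -p 8080:80 username/docker-react
# then browse to http://localhost:8080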
My .travis.yml file:
language: generic
sudo: required
services:
- docker
before_install:
- docker build -t username/docker-react -f Dockerfile.dev .
script:
- docker run -e CI=true username/docker-react npm run test
deploy:
provider: elasticbeanstalk
region: us-east-2
app: "docker-react"
env: "DockerReact-env"
bucket_name: "my bucket-name"
bucket_path: "docker-react"
on:
branch: master
access_key_id: $AWS_ACCESS_KEY
secret_access_key: $AWS_SECRET_KEY
When I open my app, I get a 502 Bad Gateway error.
I had the same problem. After reading some of the documentation, I figured that docker-compose.yml may actually be picked up before anything else. Deleting my docker-compose.yml (which I was only using locally) solved the issue for me.
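If the docker-compose.yml is still wanted for local development, a sketch of an alternative (my assumption, not something tested in this thread) is to drop it from the bundle Travis uploads by deleting it in a before_deploy step, since the Elastic Beanstalk Docker platform appears to prefer docker-compose.yml over the Dockerfile when both are present:
before_deploy:
- rm -f docker-compose.yml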

Permission error on Gitlab CI deployment with google app engine

I am trying to set up a fully integrated deployment process with GitLab and Google App Engine.
But the deployment fails because it seems I am missing permissions, and I really don't know which permission is needed here.
I activated the App Engine API and the Cloud Build API.
The rights my service account has were shown in a screenshot (not reproduced here).
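One way to double-check those roles from the CLI (my addition; PROJECT_ID and SA_EMAIL are placeholders for the project and the service account's email) is:
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:SA_EMAIL" \
  --format="value(bindings.role)"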
My gitlab-ci.yml file:
Staging Deployment:
  stage: Deploy
  environment:
    name: Staging
  before_script:
  - echo "deb http://packages.cloud.google.com/apt cloud-sdk-jessie main" | tee /etc/apt/sources.list.d/google-cloud-sdk.list
  - curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  - apt-get update
  - apt-get -qq -y install google-cloud-sdk
  script:
  - echo $DEPLOY_KEY_FILE_PRODUCTION > /tmp/$CI_PIPELINE_ID.json
  - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
  - gcloud config list
  - gcloud config set project $PROJECT_ID_PRODUCTION
  - gcloud --quiet --project $PROJECT_ID_PRODUCTION app deploy gae/test.yml
  only:
    refs:
    - master
And finally my test.yml:
# [START django_app]
runtime: python37
service: testing
entrypoint: gunicorn -b :$PORT manta.wsgi --timeout 120
default_expiration: "5m"
env_variables:
  DJANGO_SETTINGS_MODULE: manta.settings.dev
handlers:
# This configures Google App Engine to serve the files in the app's
# static directory.
- url: /static
  static_dir: static/
# This handler routes all requests not caught above to the main app.
# It is required when static routes are defined, but can be omitted
# (along with the entire handlers section) when there are no static
# files defined.
- url: /.*
  script: auto
# [END django_app]
And the error I have during deployment:
ERROR: (gcloud.app.deploy) Permissions error fetching application [apps/testing]. Please make sure you are using the correct project ID and that you have permission to view applications on the project.
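The message suggests the service account lacks the appengine.applications.get permission. A hedged fix (not confirmed in the thread) is to grant it the App Engine deployment roles, with SA_EMAIL as a placeholder for the service account's email:
gcloud projects add-iam-policy-binding $PROJECT_ID_PRODUCTION \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/appengine.deployer"
gcloud projects add-iam-policy-binding $PROJECT_ID_PRODUCTION \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/appengine.serviceAdmin"
Deployments also typically need Cloud Build and Cloud Storage access on the project (for example roles/cloudbuild.builds.editor and roles/storage.admin), since the source is staged in a bucket and built by Cloud Build.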

Couldn't connect to the Docker daemon due to an SSL problem

I'm trying to deploy a Managed VM (Python) on Google App/Compute Engine with this command:
gcloud --verbosity debug preview app deploy ./app.yaml --set-default
During deployment the VM instance is created, but the deploy exits with an error (here is a paste of the last few lines of the log):
DEBUG: Display disabled.
Copying certificates for secure access. You may be prompted to create an SSH keypair.
DEBUG: Loaded Command Group: ['gcloud', 'compute', 'copy_files']
DEBUG: Detected docker environment variables: DOCKER_HOST=tcp://104.197.50.238:2376, DOCKER_CERT_PATH=../../../../../tmp/tmpPbKmOs, DOCKER_TLS_VERIFY=True
INFO: Starting new HTTPS connection (1): 104.197.50.238
DEBUG: Failed to connect to Docker daemon due to an SSL problem: [Errno 1] _ssl.c:523: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
DEBUG: (gcloud.preview.app.deploy) Couldn't connect to the Docker daemon due to an SSL problem.
Traceback (most recent call last):
File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/calliope/cli.py", line 591, in Execute
result = args.cmd_func(cli=self, args=args)
File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/calliope/backend.py", line 1191, in Run
resources = command_instance.Run(args)
File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/appengine/app_commands/deploy.py", line 208, in Run
implicit_remote_build)
File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/appengine/lib/deploy_command_util.py", line 137, in BuildAndPushDockerImages
with docker_util.DockerHost(cli, version_id, remote) as docker_client:
File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/appengine/lib/images/docker_util.py", line 215, in __enter__
return containers.NewDockerClient(local=(not self._remote), **kwargs)
File "/home/zdenulo/bin/google-cloud-sdk/./lib/googlecloudsdk/appengine/lib/docker/containers.py", line 313, in NewDockerClient
'Couldn\'t connect to the Docker daemon due to an SSL problem.' + msg)
DockerDaemonConnectionError: Couldn't connect to the Docker daemon due to an SSL problem.
ERROR: (gcloud.preview.app.deploy) Couldn't connect to the Docker daemon due to an SSL problem.
Apparently there is a problem with SSL, but I have no idea how to solve it, and I'm quite desperate at the moment :)
I have:
Docker version 1.8.2, build 0a8c2e3
Boot2Docker-cli version: v1.8.0 Git commit: 9a26066
Google Cloud SDK 0.9.79
app 2015.09.23
app-engine-java 1.9.26
app-engine-python 1.9.26
bq 2.0.18
bq-nix 2.0.18
core 2015.09.23
core-nix 2015.09.03
gcloud 2015.09.21
gsutil 4.15
gsutil-nix 4.14
preview 2015.09.21
OpenSuse 13.2
OpenSSL 1.0.1k-fips 8 Jan 2015
I would very much appreciate help of any kind.
EDIT:
app.yaml
module: default
runtime: python27
api_version: 1
threadsafe: yes
vm: true
resources:
  cpu: .5
  memory_gb: 1.3
manual_scaling:
  instances: 1
handlers:
- url: .*
  script: main.app
Are you using Homebrew Python on OS X? If so, there's an existing bug for OpenSSL and Docker here.
The easiest way around this is to temporarily use a virtualenv with the system Python:
pip install virtualenv
virtualenv ~/system-python-env
source ~/system-python-env/bin/activate
gcloud preview app deploy ...
You can get a short-lived SSL credential with "gcloud docker --authorize-only". Then immediately do your "gcloud preview app deploy ...":
gcloud docker --authorize-only
gcloud preview app deploy app.yaml --promote
