I am trying to set up a fully integrated deployment pipeline with GitLab and Google App Engine.
But the deployment fails because it seems I am missing permissions, and I really don't know which permission is needed here.
I activated the App Engine API and the Cloud Build API.
The rights my service account has:
My .gitlab-ci.yml file:
Staging Deployment:
  stage: Deploy
  environment:
    name: Staging
  before_script:
    - echo "deb http://packages.cloud.google.com/apt cloud-sdk-jessie main" | tee /etc/apt/sources.list.d/google-cloud-sdk.list
    - curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    - apt-get update
    - apt-get -qq -y install google-cloud-sdk
  script:
    - echo $DEPLOY_KEY_FILE_PRODUCTION > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud config list
    - gcloud config set project $PROJECT_ID_PRODUCTION
    - gcloud --quiet --project $PROJECT_ID_PRODUCTION app deploy gae/test.yml
  only:
    refs:
      - master
And finally my test.yml:
# [START django_app]
runtime: python37
service: testing
entrypoint: gunicorn -b :$PORT manta.wsgi --timeout 120
default_expiration: "5m"

env_variables:
  DJANGO_SETTINGS_MODULE: manta.settings.dev

handlers:
  # This configures Google App Engine to serve the files in the app's
  # static directory.
  - url: /static
    static_dir: static/

  # This handler routes all requests not caught above to the main app.
  # It is required when static routes are defined, but can be omitted
  # (along with the entire handlers section) when there are no static
  # files defined.
  - url: /.*
    script: auto
# [END django_app]
And the error I have during deployment:
ERROR: (gcloud.app.deploy) Permissions error fetching application [apps/testing]. Please make sure you are using the correct project ID and that you have permission to view applications on the project.
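For reference, a hedged sketch of the IAM roles that are typically granted to a CI service account for App Engine deploys (the role names are standard GCP roles; SA_EMAIL is a placeholder for the service account used in the pipeline). It may also be worth double-checking $PROJECT_ID_PRODUCTION, since the [apps/testing] part of the error is the project ID that gcloud resolved:

for ROLE in roles/appengine.deployer roles/appengine.serviceAdmin \
            roles/cloudbuild.builds.editor roles/storage.admin \
            roles/iam.serviceAccountUser; do
  gcloud projects add-iam-policy-binding $PROJECT_ID_PRODUCTION \
    --member="serviceAccount:SA_EMAIL" --role="$ROLE"
done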
I am kind of new to Cloud Build, so I am confused about what is happening.
First, this is my file structure:
cloudbuild.yaml
backend/
    Dockerfile
    app.yaml
I have an application that I dockerized and deployed to App Engine flex with a custom runtime.
Here's my Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:5.0-buster-slim AS build
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
ENV ASPNETCORE_URLS=http://+:80;
WORKDIR /app
COPY --from=build /app/out .
EXPOSE 80
ENTRYPOINT ["dotnet", "myapp.dll"]
And this is my App Engine flex file (app.yaml):
runtime: custom
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
service: backend
network:
  name: my-network
  subnetwork_name: my-network-subnet
  instance_tag: "backend"
  forwarded_ports:
I have successfully deployed this app on App Engine flex using this command:
gcloud app deploy --appyaml=app.yaml
Then I added a cloudbuild.yaml file, following this Google doc:
steps:
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args: ['-c', 'gcloud config set app/cloud_build_timeout 2000 && gcloud app deploy --appyaml=backend/app.yaml']
As you can see in the cloudbuild.yaml, I didn't add the timeout attribute, because it gave me this error each time I tried to submit the build:
Error Response: [13] Error parsing cloudbuild.yaml for runtime custom: Argument is not an object: "2000s"
After removing the timeout attribute, Cloud Build started behaving in a weird way: it kept creating build jobs on its own until it reached over 20 builds.
I had to stop these builds manually because they exceeded the 120-minute free quota limit.
Can someone tell me whether my cloudbuild.yaml is causing the issue, or whether it's a problem with Google Cloud?
So the problem was writing the Cloud Build config as a YAML file; instead, I re-wrote it as a JSON file. I am not entirely sure why the cloudbuild.yaml file was giving me errors, but that was my solution.
{
  "steps": [
    {
      "name": "gcr.io/google.com/cloudsdktool/cloud-sdk",
      "entrypoint": "bash",
      "args": [
        "-c",
        "gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --appyaml=app.yaml"
      ]
    }
  ],
  "timeout": "1600s"
}
Also, the Cloud Build config, the app.yaml, and the Dockerfile must be in the root of the branch.
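One possible explanation, offered as an assumption rather than something stated above: for runtime: custom, gcloud app deploy also looks for a file literally named cloudbuild.yaml and tries to parse it as the custom-runtime build config, which would account both for the parsing error and for the extra builds each deploy spawned. Keeping the CI config under a different file name (deploy.cloudbuild.json is an illustrative name) and submitting it explicitly avoids the name clash:

# Submit the build with a config file that gcloud app deploy will not pick up as the runtime build file.
gcloud builds submit --config=deploy.cloudbuild.json .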
I am trying to add Cloud Build on top of my App Engine Flask app. Everything works, but for some reason I can't access the substitution variables I declared in the trigger.
Env vars are still being fetched from app.yaml, and they are parsed literally, not as variables. When I remove them from app.yaml, Python throws a NoneType error.
Trigger screenshot: https://i.stack.imgur.com/Ii6Jv.png
app.yaml screenshot: https://i.stack.imgur.com/bg646.png
runtime: python310
instance_class: F4
automatic_scaling:
  max_instances: 8
env_variables:
  _CONFIG_TYPE: ${_CONFIG_TYPE}
cloudbuild.yaml screenshot: https://i.stack.imgur.com/jo0PN.png
steps:
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args: ['-c', 'gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy']
timeout: '1600s'
substitutions:
  _CONFIG_TYPE: ${_CONFIG_TYPE}
It won't work because the gcloud app deploy command starts a new Cloud Build behind the scenes to build a container with your code and to deploy it. Your env var won't change anything there.
The solution is to perform a bash replacement, with sed for instance.
app.yaml file
runtime: python310
instance_class: F4
automatic_scaling:
  max_instances: 8
env_variables:
  _CONFIG_TYPE: ##_CONFIG_TYPE##
Cloud Build Step with env var usage
steps:
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    env:
      - 'CONFIG_TYPE=${_CONFIG_TYPE}'
    args:
      - '-c'
      - |
        sed -i "s/##_CONFIG_TYPE##/$${CONFIG_TYPE}/g" app.yaml
        gcloud config set app/cloud_build_timeout 1600
        gcloud app deploy
timeout: '1600s'
substitutions:
  _CONFIG_TYPE: ${_CONFIG_TYPE}
Cloud Build Step without env var usage
steps:
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        sed -i "s/##_CONFIG_TYPE##/${_CONFIG_TYPE}/g" app.yaml
        gcloud config set app/cloud_build_timeout 1600
        gcloud app deploy
timeout: '1600s'
substitutions:
  _CONFIG_TYPE: ${_CONFIG_TYPE}
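To test either variant before wiring it into the trigger, the substitution value can also be passed on the command line (the value dev is just an example):

gcloud builds submit --config=cloudbuild.yaml --substitutions=_CONFIG_TYPE=dev .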
I am using GitLab and deploying my Node.js application to Google App Engine.
The Google service account key is added as a variable in the GitLab settings:
SERVICE_ACCOUNT_KEY:
{
  "type": "service_account",
  "project_id": "node-us",
  "private_key_id": "",
  "private_key": "",
  "client_email": "gitlab-demo-service-account@node-us.iam.gserviceaccount.com",
  "client_id": "",
  "auth_uri": "",
  "token_uri": "",
  "auth_provider_x509_cert_url": "",
  "client_x509_cert_url": ""
}
.gitlab-ci.yml
image: node:latest

cache:
  paths:
    - node_modules/

before_script:
  - echo "deb http://packages.cloud.google.com/apt cloud-sdk-jessie main" | tee /etc/apt/sources.list.d/google-cloud-sdk.list
  - curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  - apt-get update
  - apt-get -qq -y install google-cloud-sdk

deploy_production:
  stage: deploy
  environment: Production
  only:
    - master
  script:
    - echo $SERVICE_ACCOUNT_KEY > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud --quiet --project node-us app deploy app.yaml
  after_script:
    - rm /tmp/$CI_PIPELINE_ID.json
My root folder has an app.yaml file and a .env file.
So far I have been testing the flow, which worked fine and deployed successfully to Google App Engine (it does not contain any secret keys yet).
However, I want my env variables containing secret keys to be ignored via .gitignore and not to be part of the app.yaml file either.
How can I pass my secret env keys?
Don't pass them!
Use Secret Manager to hold your secrets. In your repository, use the Secret Manager URI to reference the secret, with the secret version. Like this, there is no secret in your code or in the app.yaml/.env files.
If you need to update the secret, do it manually. Some tasks are hard, or expensive, to automate.
Note: the article that you mention was released 6 months before the Secret Manager release (early this year, 2020).
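A minimal sketch of that approach, assuming a secret named API_KEY and the default App Engine service account of the node-us project (both names are illustrative, not from the original answer):

# Store the value once; it never enters the repository.
echo -n "my-secret-value" | gcloud secrets create API_KEY --data-file=- --project node-us
# Let the App Engine runtime service account read it.
gcloud secrets add-iam-policy-binding API_KEY --project node-us \
  --member="serviceAccount:node-us@appspot.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

The app then loads projects/node-us/secrets/API_KEY/versions/latest at startup through the Secret Manager client library instead of reading a .env file.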
I'm trying to dockerize a MERN stack, but when the time comes for React to start, the container exits with status 0.
This is the log error:
This is the structure of my project:
- project
  - server
    - api
      api.yml
    server.js
    Dockerfile
  - www
    app.yml
    Dockerfile
  docker-compose.yml
The www folder contains the starter files that are generated by npx create-react-app www.
The content of server/Dockerfile is:
FROM node:latest
RUN mkdir -p /usr/server
WORKDIR /usr/server
RUN npm install -g nodemon
EXPOSE 3000
CMD [ "npm", "start" ]
The content of www/Dockerfile is:
FROM node:latest
RUN mkdir -p /usr/www/src/app
WORKDIR /usr/www/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
And, finally, the content of docker-compose.yml is:
version: '3.7'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
  api:
    build: ./server/
    ports:
      - "6200:6200"
    volumes:
      - ./server:/usr/server
    depends_on:
      - mongodb
  www:
    build: ./www/
    ports:
      - 3000:3000
    volumes:
      - ./www:/usr/www/src/app
    depends_on:
      - api
Now, in the title I mention Google Cloud, because I tried to deploy the separate "server" and "www" parts to my production environment. The deployment of server works correctly, but www fails with this error:
The errors generated by Docker and Google Cloud seem very similar, or am I wrong? Could it be a React problem, or am I wrong in both cases?
I also attach the contents of the app.yaml and api.yaml files.
app.yaml
runtime: nodejs
env: flex

# Only for developing
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10

handlers:
  - url: /.*
    static_files: build/index.html
    upload: build/index.html
  - url: /
    static_dir: build
The content of the api.yaml file is the same as that of app.yaml but without the handlers section.
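As an aside that is not part of the original post: react-scripts exits with status 0 when it runs without an attached terminal, which matches the local Docker symptom above. A common workaround is to keep stdin open for the www service in docker-compose.yml, for example (a sketch, not tested against this exact setup):

  www:
    build: ./www/
    stdin_open: true   # keep stdin open so the CRA dev server does not exit immediately
    tty: true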
I want to host my Spring Boot application on Google App Engine after building it on Travis.
I have configured the following in my .travis.yml, but I always get the following error from GCP:
Step #0: Exception in thread "main" com.google.cloud.runtimes.builder.exception.ArtifactNotFoundException: No deployable artifacts were found. Unable to proceed.
I don't know how to fix this problem in my code.
sudo: false
language: java
jdk:
  - oraclejdk8
cache:
  directories:
    - "$HOME/.m2/repository"
before_deploy:
  - mv target/healthcare-1.0.jar target/healthcare.jar
deploy:
  provider: gae
  skip_cleanup: true
  keyfile: secret.json
  project: healthcare-196408
  file: target/healthcare.jar
before_install:
  - chmod +x mvnw
  - openssl aes-256-cbc -K $encrypted_07ec618bb998_key -iv $encrypted_07ec618bb998_iv -in secret.json.enc -out secret.json -d
My app.yaml file:
runtime: java
env: flex
handlers:
  - url: /.*
    script: this field is required, but ignored
It seems like Travis isn't uploading my .jar at all, but how can I fix this problem?
I have tried to build it as a Docker image, with the same result:
Invalid or corrupt jarfile /app.jar --from docker
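A hedged sketch of one way to rule out a staging problem (this is an assumption, not a confirmed fix: the flex Java runtime builder scans the deployed directory for a .jar/.war, so the artifact is copied next to app.yaml and that directory is deployed directly with gcloud):

# Run after the Maven build; paths follow the .travis.yml above.
mv target/healthcare-1.0.jar target/healthcare.jar
cp target/healthcare.jar .
gcloud app deploy app.yaml --project healthcare-196408 --quiet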