git push and deploy for golang appengine

appcfg.py --oauth2 update ./ works, but I want to replace it with git push-and-deploy. After a successful push, App Engine gives me errors that my .go files are not compiled.
Is it possible for App Engine to compile the files when using a git repo? If not, do I need to place the compiled files in another directory?

This is a known issue. Please star it.
For now, all you can do is keep using appcfg.py. If it is cumbersome to do the OAuth authorisation all the time (or you have multiple accounts), consider storing your refresh token and then doing:
appcfg.py update . --oauth2 --oauth2_refresh_token=`cat .token`
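If you only have the interactive flow so far, a minimal sketch of the one-time setup (hedged: the credential cache location ~/.appcfg_oauth2_tokens and the token value below are assumptions based on the old appcfg SDK):
appcfg.py --oauth2 update ./                 # first run triggers the browser OAuth flow and caches credentials
echo "1/your-refresh-token-here" > .token    # copy the refresh_token value from the cache (placeholder shown)
echo ".token" >> .gitignore                  # keep the token out of version control
appcfg.py update . --oauth2 --oauth2_refresh_token=`cat .token`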

Related

GAE: No longer able to update my Gaelyk project due to appcfg losing support

Recently tried to update my Gaelyk project (yes, it's old, but it works well and I still use it), but Google App Engine will no longer accept the update. The error message returned is "Deployments using appcfg are no longer supported. See https://cloud.google.com/appengine/docs/deprecations". The thing is, I never used appcfg to deploy my application; I used Gaelyk and Gradle. But obviously Gaelyk must have used appcfg under the covers.
I did download the replacement Google Cloud SDK, but this new tool works nothing like Gaelyk and Gradle did. Is there anything I can do to get Gaelyk working again? Or is Gaelyk just dead, meaning I need to rewrite my application (in Node.js or something else instead of Groovy)?
This will be hard, but I will try to help as much as possible. I think you could try migrating it to the app.yaml configuration of GAE.
I am not sure which plugins are used in the project. From the Gaelyk template project I can see that it uses appengine-geb which, according to the documentation, uses gradle-appengine-plugin behind the scenes (the link in that doc is broken, but the correct one is below).
On the GitHub page of gradle-appengine-plugin I found the following.
There is a note:
NOTE: All App Engine users are encouraged to transition to the new
gradle plugin for their projects.
And in FAQ part there is following information:
How do I deploy with gcloud?
If you're using gcloud to deploy your application, the newest version of app deploy doesn't support war directories; you will need to provide it with an app.yaml, OR you can use the appengineStage task to create a directory that is deployable in /build/staged-app
$ ./gradlew appengineStage
$ gcloud app deploy build/staged-app/app.yaml --project [app id] --version [some version]
NOTES:
You must explicitly define all config files you want to upload (cron.yaml, etc.)
This does not work with EAR formatted projects.
I think the best option will be to migrate to the new appengine plugin or, if that's not possible, to try implementing it with the gcloud app deploy command, creating the config files manually (at least app.yaml). For this migration I can provide you with this document.
I hope you will manage somehow...
I can confirm that Serge's answer on the Gaelyk Groups site works; the same procedure that he figured out also worked for me. To summarize:
Run gradlew appengineRun as you previously did with Gaelyk.
Copy all jar files inside the build\exploded-app\WEB-INF\lib folder into a \src\main\webapp\WEB-INF\lib folder (for me, the new lib folder did not exist previously).
To deploy, use the now-required gcloud tool: instead of running gradlew appengineUpdate (which fails now), run
gcloud app deploy appengine-web.xml, where that XML file can be found in your webapp/WEB-INF directory. I navigated to that directory to run the gcloud command, but you can use a relative path if your working directory is elsewhere. (There are a number of optional flags associated with the gcloud app deploy command, but I didn't need any of them.)
Serge needed to use these instructions to convert datastore-indexes.xml to index.yaml and run gcloud app deploy index.yaml; I didn't need that step, however, because I had no datastores.
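Putting those steps together as a shell sketch (paths assume a standard Gaelyk/Gradle layout; the mkdir is there because the lib folder may not exist yet):
./gradlew appengineRun                        # builds the exploded app (and runs it locally) as before
mkdir -p src/main/webapp/WEB-INF/lib          # the lib folder may not exist yet
cp build/exploded-app/WEB-INF/lib/*.jar src/main/webapp/WEB-INF/lib/
cd src/main/webapp/WEB-INF                    # deploy from where appengine-web.xml lives
gcloud app deploy appengine-web.xml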

Multiple services with different dockerfiles on GAE Flexible

I'm using Google App Engine Flexible with the Python environment. Right now I have two services, default and worker, that share the same codebase, configured by app.yaml and worker.yaml. Now I need to install a native C++ library, so I had to switch to a custom runtime and added a Dockerfile.
Here is the Dockerfile generated by the gcloud beta app gen-config --custom command:
FROM gcr.io/google-appengine/python
LABEL python_version=python3.6
RUN virtualenv --no-download /env -p python3.6
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
CMD exec gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi
Previously my app.yaml and worker.yaml each had its own entrypoint: config that specified the command needed to start the service.
So, my question is how can I use two different commands to start the services?
EDIT 1
So far I have been able to solve this by rewriting the CMD line in the Dockerfile for each deploy of each service. However, I'm not quite satisfied with this solution.
The gcloud app deploy command has an --image-url flag that allows setting an image URL from GCR. I haven't researched that yet, but it seems I could just upload images to GCR and use the URLs, since they don't change that often.
Yes, as you mentioned, I think using the --image-url flag is a good option here.
Specify a custom runtime.
Build the image locally, tag it, and push it to Google Container Registry (GCR).
Then deploy your service, specifying a custom service file and the remote image on GCR using the --image-url option.
Here's an example that accomplishes different entrypoints in two services that share the same code:
...this is assuming that the "flex" and not "standard" App Engine offering is being used.
Let's say you have a project called my-proj,
with a default service that is not important,
and a second service called queue-processor, which uses much of the same code from the same directory.
Create a separate Dockerfile for it called QueueProcessorDockerfile,
and a separate app.yaml called queue-processor-app.yaml to tell Google App Engine what we want to happen.
QueueProcessorDockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
COPY package.json ./
COPY yarn.lock ./
RUN npm install -g yarn
RUN yarn
# Bundle app source
COPY . .
CMD [ "yarn", "process-queue" ]
*Of course, I have a "process-queue" script in my package.json.
queue-processor-app.yaml
runtime: custom
env: flex
... other stuff...
...
Build and tag the Docker image.
Check out Google's guide here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
docker build -t eu.gcr.io/my-proj/queue-processor -f QueueProcessorDockerfile .
Push it to GCR:
docker push eu.gcr.io/my-proj/queue-processor
Deploy the service, specifying which YAML config file Google should use, as well as the image URL you have pushed:
gcloud app deploy queue-processor-app.yaml --image-url eu.gcr.io/my-proj/queue-processor
Since the Dockerfile name cannot be changed (gcloud app deploy expects the file to be called exactly Dockerfile), the only way to avoid having to modify it would be to store each service in its own separate directory. Clean separation: each service has its own Dockerfile and/or startup configuration.
But this raises a question: how to deal with the code shared by multiple services? Using symlinks (which works great for sharing code across standard env services) doesn't work for the flexible env services, see Sharing code between flexible environment modules in a GAE project.
I see a few possible approaches, none really ideal, but maybe more appealing than what you currently have:
hard-link each and every shared source code file (since hard-linking directories is not possible). A bit tedious and error-prone, but you only have to do it once per file.
package and publish your shared code as an external library, added to the requirements.txt file of each service using it
split the shared code into a separate repository and keep a copy of that repository in each service using it (maybe as a git submodule, if using git?). You just need to ensure at service deployment time that the shared repository is pulled at the proper version, which can be done quite reliably through automation. It's a bit more complicated if you have uncommitted changes in this repo: you'd have to patch the same changes in all services.
keep multiple copies of the Dockerfile under different names and simply copy the right one over instead of always editing the same file (see the sketch after this list). Symlinking instead of copying might work as well: the symlink doesn't need to be followed outside of the service directory, so if it's just replicated as a symlink it'll work.
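A hedged sketch of that last approach (app.yaml and worker.yaml are the service configs from the question; the Dockerfile.default/Dockerfile.worker names are illustrative assumptions):
# Deploy the default service with its own CMD
cp Dockerfile.default Dockerfile
gcloud app deploy app.yaml
# Deploy the worker service with a different CMD
cp Dockerfile.worker Dockerfile
gcloud app deploy worker.yaml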
So I had a very similar issue with my Java applications. We were looking to migrate from Heroku to GAE and were attempting to simulate the Heroku Procfile with GAE services. Effectively what we did was create separate directories in our application, src/main/appengine/web and src/main/appengine/worker, where each directory contained the app.yaml and Dockerfile specific to the process. Then, using the mvn appengine:deploy capabilities, we specified -Dapp.stage.dockerDirectory and -Dapp.stage.appEngineDirectory respectively for each service we wanted to deploy. Using just a few parameters we were able to script out parallel deployments of each service from the same code base. Not sure if this works in your situation, but it was very useful for us. Here are the two example commands in their entirety:
Web Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/web -Dapp.stage.appEngineDirectory=src/main/appengine/web -Dapp.stage.stagingDirectory=target/appengine-web -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}
Worker Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/worker -Dapp.stage.appEngineDirectory=src/main/appengine/worker -Dapp.stage.stagingDirectory=target/appengine-worker -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}

GoogleAppEngine wordpress complains (Could not create directory) on update and plugin install

I have a problem updating WordPress or installing WordPress plugins in my local version of Google App Engine. I get the following message.
Downloading install package from
https://downloads.wordpress.org/plugin/event-espresso-decaf.4.8.38.decaf.zip…
Unpacking the package…
Could not create directory.
With the same code base I am able to install plugins using MAMP (Macintosh, Apache, MySQL, and PHP). However, it fails with Google's Python dev server (dev_appserver.py).
I tried changing the permissions of the file system by giving all users write access. I tried executing it as sudo dev_appserver.py .. I followed the advice in other posts. No luck.
What's the problem here? With MAMP everything looks good locally, but the same code breaks when I deploy to GAE (appcfg.py -A APP-ID update app.yaml).
Cannot write to file on Google App Engine Dev server with PHP?
Starting with SDK 1.9.18, dev_appserver disables local file writing by default to better simulate the production environment. You can enable file writing by adding google_app_engine.disable_readonly_filesystem=1 to your php.ini file.
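A minimal sketch of that fix (assuming a php.ini at your application root, which is where the PHP runtime looks for it):
# Allow the dev server to write to the local filesystem again
echo "google_app_engine.disable_readonly_filesystem = 1" >> php.ini
dev_appserver.py .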
I also wrote a blog post about running WordPress on App Engine flexible environment at:
https://wp.gaeflex.ninja/2016/03/25/using-wordpress-multisite/
Please give it a try too.

Google App Engine Source Not Updating PHP

I cannot get my server code to update. I'm running a PHP instance on GAE and no matter what I do, the files won't update. In the source code view, I can see the files have updated, but when I attempt to access the updated file, I'm still viewing the old version. I've also attempted disconnecting my Bitbucket repo and using the appcfg.py update project-name command, but the files aren't refreshing when I attempt to access them. I'm not sure what to do to force the changes to take place.
My app.yaml contains the following code
- url: /(.+\.php)$
script: \1
secure: always
So the files should be getting read, right?
I was able to figure out what went wrong. I downloaded my code using appcfg.py download_app -A <your_app_id> -V <your_app_version> <output-dir> and noticed that I was downloading the old versions of the files (and wasn't downloading the new files at all). It turns out that using source control within GAE will upload new code, but won't deploy it. I attempted to use appcfg.py update project-name one more time, but it didn't work: it turned out I hadn't disconnected my Bitbucket account (I could have sworn that I did...). Once it was disconnected, I was able to update the project using appcfg.py update project-name. While I was figuring this out, I reached out to Google support and received this message:
To use the feature of push to deploy you need to spin up the Jenkins instance on GCE (Google Compute Engine), and then it will take the updated code and execute it in the environment. Go through [1] for how to enable the Jenkins instance and its configuration according to different runtimes.
In your issue, you just mirrored the code from Bitbucket to Cloud Repository, which only does version control for the application, not execution. So basically you have the option of using a Jenkins instance as I described above to test the different versions of the code, or of using the appcfg.py update command from your local repository.
I haven't attempted to install and use Jenkins (since I fixed the problem after disconnecting my Bitbucket account), but it may help others who have run into this issue.
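For reference, the verification step described above as a hedged sketch (placeholders as in the command quoted earlier; the diff against the working tree is an assumption about what you want to compare):
appcfg.py download_app -A <your_app_id> -V <your_app_version> /tmp/deployed
diff -r /tmp/deployed . | head    # spot-check whether the served code matches your local tree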

AngularJS Continuous Deployment Tools

I have been trying out Codeship and Heroku for continuous deployment of an AngularJS application I am writing at the moment. The app was created using Yeoman and uses Bower and Grunt. Initially I thought this seemed like a really good setup: Codeship was free to use, I was quickly able to configure it to build my AngularJS project, and it offered the ability to add a deployment step after the build. There were even many PaaS providers to choose from (Heroku, S3, Google App Engine, etc.). However, I seem to have become a bit stuck getting the app to run on Heroku.
The problem started from the fact that all the documentation suggested that I remove the /dist path from my .gitignore so that this directory is published to Heroku post build. This was mainly from documentation that talked about publishing to Heroku from a local machine, but I figure this is all Codeship is doing under the hood anyway. I didn't want to do this as I don't believe I should be checking build output into source control. The /dist folder was added to .gitignore for a good reason. Furthermore, this kind of defeats the point of having a CI server somewhat, as I might as well just push the latest build from my machine.
After some more digging around, I found out that I could add a postinstall step to my package.json file, such as bower install && grunt build, which would re-run the build on Heroku and hence repopulate all the Bower dependencies (something else they wanted me to check in to source control!) and the dist directory.
After giving this a try, it became apparent that I would need to add bower and grunt as dependencies in package.json, which meant moving them out of devDependencies, which is where they belong!
So I now seem to be stuck. All I want to do is publish my build artefacts (/dist), the dependencies (/bower_components), and the server.js file that will run the site. Does anyone know how to achieve this with Heroku and Codeship? Alternatively, has anyone had any success with different tools? I am looking for something that is free, and I am willing to accept that it will not be production-stable (won't scale to multiple servers, etc.); this is fine for now, as all I want to do is continuously deploy the app for internal testing and share the output with non-technical members of my team so we can discuss features we'd like to prioritise.
Any advice would be greatly appreciated.
Thanks
Ahoy, Marko from the Codeship crew here. Did you already send us an in-app message about this? I'm sure we can get your application building on Codeship and deploying to Heroku successfully.
As a very short answer, the easiest way to get this running would be to add both bower and grunt to your dependencies in package.json. Another possibility would be to look for a custom buildpack with both tools already installed.
And finally, you could also run the tools on Codeship, add the newly installed files to the repository, commit the changes, and push this new commit to Heroku. If you want to use this, you'd very probably need to force-push the changes, though.
Feel free to reach out to me via the in-app messenger (lower right corner of the site) and I'd be happy to help you get this working!
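For the first option Marko mentions (moving bower and grunt into dependencies), a hedged one-liner; versions are whatever npm resolves at the time:
# Record bower and grunt as runtime dependencies so Heroku installs them during its build
npm install --save bower grunt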
I found two ways to get this to work.
Heroku Node Custom Buildpack
Use the mbuchetics Heroku buildpack. This works by basically re-building the app once it has been pushed to Heroku.
There were still a few tricks I had to employ to make this work. In Gruntfile.js, two new tasks needed to be configured, called heroku:production and heroku:development; these are what the buildpack executes to build the app. I initially just aliased the main build task, but found that either the buildpack or Heroku had a problem running jshint, so in the end I copied the build task and took out the parts that I didn't need.
Also, in package.json I had to add this:
"scripts": {
"postinstall": "bower cache clean && bower install"
}
This made sure the bower_components were available in Heroku.
Pros
This allowed me to keep the .gitignore file intact, so that the 'binaries' in the dist directory and the dependencies in the bower_components directory were not committed into source control.
Cons
This is basically re-building the app once it is on Heroku, and I generally prefer to use the same 'binaries' throughout the entire build and deployment pipeline. That way I know that the same code that was built is the same code that was tested and the same code that was deployed.
It also slows down the deployment as you have to wait for the app to build twice.
CodeShip Custom Script Deployment
Not being satisfied with building my app twice, I tried using a Custom Script pipeline in Codeship instead of the pre-existing Heroku one. The script basically modifies the .gitignore file to allow the dist folder to be committed and then pushes to the Heroku remote (which leaves the code on the origin remote unaffected by the change).
I ended up with the following bash script:
#!/bin/bash
gitRemoteName="heroku_$APP_NAME"
gitRemoteUrl="git@heroku.com:$APP_NAME.git"
# Configure git remote
git config --global user.email "your-email@example.com"
git config --global user.name "Build"
git remote add $gitRemoteName $gitRemoteUrl
# Allow dist to be pushed to heroku remote repo
echo '!dist' >> .gitignore
# Also make sure any other exclusions don't apply to that directory
echo '!dist/*' >> .gitignore
# Commit build output
git add -A .
herokuCommitMessage="Build $CI_BUILD_NUMBER for branch $CI_BRANCH. Commited by $CI_COMMITTER_NAME. Commit hash $CI_COMMIT_ID"
echo $herokuCommitMessage
git commit -m "$herokuCommitMessage"
# Must merge the last build in the Heroku remote, but always choose new files in the merge
git fetch $gitRemoteName
git merge "$gitRemoteName/master" -X ours -m "Merge last build and overwrite with new build"
# Branch is in detached mode so must reference the commit hash to push
git push $gitRemoteName $(git rev-parse HEAD):refs/heads/master
Pros
This only requires a single build of the app and deploys the same binaries that were tested during the test phase.
Cons
I've used this script quite a few times now and it seems relatively stable. However, one issue I know of is that when a new pipeline is created there is no code on the master branch yet, so this script fails when it tries to merge from the Heroku remote. At the moment I get around this by doing an initial push of the master branch to Heroku before kicking off a build, but I imagine there is probably a better Git command I could run, along the lines of 'only merge this branch if it already exists'.
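One hedged way to handle that first-deploy case would be to guard the fetch-and-merge with a remote-branch check, for example:
# Only merge if the remote master branch already exists
if git ls-remote --exit-code --heads $gitRemoteName master; then
  git fetch $gitRemoteName
  git merge "$gitRemoteName/master" -X ours -m "Merge last build and overwrite with new build"
fi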
