Is there any way to run the compiler on an App Engine application written in Go without continuing to serve the application with the development server, and instead get an exit code?
I want to add a check to my automated tests in Travis that the application actually compiles.
To clarify: I have access to the App Engine SDK / development server in Travis, but I don't want to run goapp serve since it never exits.
Without actually implementing tests, your solution looks pretty hacky. Why not use goapp build? Here's my .travis.yml:
language: go
go:
- 1.2.1
# Grab newest version and suck down
install:
- export FILE=go_appengine_sdk_linux_amd64-$(curl https://appengine.google.com/api/updatecheck | grep release | grep -o '[0-9\.]*').zip
- curl -O https://storage.googleapis.com/appengine-sdks/featured/$FILE
- unzip -q $FILE
# Run build and tests
script:
- ./go_appengine/goapp test ./tests; # If you are testing
- ./go_appengine/goapp build ./packagedir; # Wherever you keep your stuff
For reference on tests or just to see a project that builds
Edit:
It has been a while, but I noticed recently that some of my builds randomly break. It is infuriating, and I have occasionally hardcoded SDK values to overcome this. No more. Here's a very hacky way of grabbing the first featured (and thus hosted, since /updatecheck does not always return a hosted version) build of the desired SDK:
export FILE=$(curl https://storage.googleapis.com/appengine-sdks/ | grep -o 'featured/go_appengine_sdk_linux_amd64-[^\<]*' | head -1)
To get just the file name:
export FILE=$(curl https://storage.googleapis.com/appengine-sdks/ | grep -oP '(?<=featured/)go_appengine_sdk_linux_amd64-[^\<]*' | head -1)
I solved this by adding an empty unit test at the entry point of the application (main_test.go). This unit test forces the whole application to compile.
Then I execute all unit tests by putting goapp test ./... in the script section.
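For illustration, here is a minimal sketch of such an empty test; the package name app is just a placeholder and must match whatever package your application's entry point actually declares:
// main_test.go, placed next to the application's entry point.
// The test body is intentionally empty; its only purpose is to make
// goapp test ./... compile the whole package.
package app

import "testing"

func TestCompiles(t *testing.T) {
	// Nothing to assert: reaching this point means the package compiled.
}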
Related
I am working on a take-home project for a job interview where I have been tasked with creating a React application with frontend and backend tests. They have instructed me that they will execute my code using "the make targets specified in Makefile" (which they have provided; see below). I have completed the application and test cases; however, I am quite lost on this last step.
I understand that a Makefile essentially executes a script, but I am unsure how to tell it to set up the application environment.
Here is a copy of the Makefile.txt
.PHONY: $(MAKECMDGOALS)
# `make setup` will be used after cloning or downloading to fulfill
# dependencies, and set up the project in an initial state.
# This is where you might download rubygems, node_modules, packages,
# compile code, build container images, initialize a database,
# anything else that needs to happen before your server is started
# for the first time
setup:
# `make server` will be used after `make setup` in order to start
# an http server process that listens on any unreserved port
# of your choice (e.g. 8080).
server:
# `make test` will be used after `make setup` in order to run
# your test suite.
test:
Any help, understanding or clarification is appreciated.
I think this is about the scripts you have in your package.json file (link). They probably want something like a make setup target, plus others that run specific functionality; for example, if they type make test in the terminal it should start running the tests, like npm test. A sketch of what that could look like is below.
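For illustration only, here is a minimal sketch of what the filled-in targets could look like, assuming a typical npm-based React project whose package.json defines start and test scripts (the exact commands are placeholders; adjust them to your own setup):
.PHONY: $(MAKECMDGOALS)

# Recipe lines below must be indented with a tab character.
setup:
	npm install

server:
	npm start

test:
	npm test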
I have a Google Cloud Platform project with several GCE instances that I use daily. I decided I wanted to mess around with App Engine and deployed a sample version of an application that I would now like to get rid of. While I've disabled the app, is there any way to delete it without deleting the entire project?
I've tried appcfg.sh delete_version appengine-dir -V 1 but I get Bad argument: You must specify a version ID via -V or --version. I've also tried appcfg.sh delete_version appengine-dir --version=1 but get the same thing.
I'm going to be really disappointed if I have to download all the data off of my instances and re-deploy the entire project just to get rid of an App Engine app which will never be used again. I am aware this is technically a duplicate question, but all of the answers I've found are for older versions of App Engine, and I just get redirected to the new console, which doesn't seem to have the same options.
EDIT: It turns out that appcfg.sh -A projID -V 1 delete_version appengine-dir works and doesn't give me any of those errors, but I get Cannot delete the default version of the default module. I get the feeling I just can't do this at all, which I personally find really, really dumb.
It is not currently possible to delete the default module of an App Engine application.
There is however an open feature request Issue 12984 for this. Feel free to star this public issue to support this request and receive updates regarding its progress.
This is what I put in my cloudbuild.yaml to delete all but the five newest versions:
# Remove old GAE versions
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      versions=$(gcloud app versions list \
        --service default \
        --sort-by '~version' \
        --format 'value(VERSION.ID)' | sed 1,5d)
      for version in $versions; do
        gcloud app versions delete "$version" \
          --service default \
          --quiet
      done
This can now be accomplished on the command line using:
gcloud app versions delete <version-name>
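For example (the version ID below is only a placeholder; list your deployed versions first, and pass --service if the version does not belong to the default service):
# list the versions deployed to the default service
gcloud app versions list --service=default
# delete one of them (replace the example version ID with a real one)
gcloud app versions delete 20160101t120000 --service=default --quiet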
I'm now able to run a Dart app using
gcloud --verbosity debug preview app run app.yaml
and also to deploy and run on AppEngine
gcloud --verbosity debug preview app deploy app.yaml
but I haven't found a way to connect a debugger to the Dart app running on the development server.
I found http://dartbug.com/21067 but still couldn't find a way to make it work.
See also https://groups.google.com/a/dartlang.org/forum/#!topic/cloud/OK1nJtg7AjQ
Update 2015-02-27
The app can be run without Docker and then be debugged like any Dart command line application:
Source: https://groups.google.com/a/dartlang.org/d/msg/cloud/zrxgOHFz_lA/q5CdLLQPBAgJ
The API server is part of the App Engine SDK, and we are using it for
running tests in the appengine package. If you look at
https://github.com/dart-lang/appengine/blob/master/tool/run_tests.sh
you will see that it expects the environment variable
APPENGINE_API_SERVER.
The API server is in /platform/google_appengine/api_server.py
and takes a number of arguments. I just tested running it like this:
$ $CLOUD_SDK/platform/google_appengine/api_server.py \
    -A dev~test-application \
    --api_port 4444 \
    --high_replication \
    --datastore_path /tmp/datastore
Running an App Engine application outside the normal development server requires a number of environment variables to be set. This worked for my application:
$ GAE_LONG_APP_ID=test-application \
    GAE_MODULE_NAME=default \
    GAE_MODULE_VERSION=version \
    GAE_PARTITION=dev \
    API_PORT=4444 \
    API_HOST=127.0.0.1 \
    dart bin/server.dart
In the Dart Editor you cannot set environment variables for each
launch configuration, so they have to be set globally before starting
the Dart Editor. In WebStorm it is possible to have run configuration
specific environment variables.
This simple setup will of course not support everything the normal development server supports. Some of the issues are:
* Only one application at a time, as it always listens on port 8080 (can easily be made configurable)
* The users API (mocking this shouldn't be that difficult)
* The modules API
* No health-checks (should not be a problem)
* All HTTP headers come directly from the client (no x-appengine- headers)
* The admin web interface is not available
* Probably other stuff as well
This is all experimental, but it is one solution for a simpler developer setup, which of course does not match the deployment environment as closely as the development server does.
Running the API Server using Docker is also possible as the image
google/cloud-sdk with the Cloud SDK is on hub.docker.com.
Use the following Dockerfile
FROM google/cloud-sdk
EXPOSE 4444
ENTRYPOINT ["/google-cloud-sdk/platform/google_appengine/api_server.py", \
            "-A", "dev~test-application", \
            "--api_port", "4444", \
            "--high_replication", \
            "--datastore_path", "/tmp/datastore"]
Build and run
$ docker build -t api_server .
$ docker run -d -p 4444:4444 api_server
Change API_HOST above to 192.168.59.103 (or wherever your Docker containers are) and run.
Regards, Søren Gjesse
Update 2014-11-27
Debugging from the Dart Editor's debugger started working with the bleeding-edge Dart build 1.8.0.edge_042017.
I assume the next dev build (probably 1.9.0-dev1.0) will include the related fixes as well.
Detailed steps how this works can be found here: https://groups.google.com/a/dartlang.org/d/msg/cloud/OK1nJtg7AjQ/u-GzUDI-0VIJ
Build a custom Docker image with the latest Dart dev build 1.8.0-dev.4.6.
The Dart team is actually preparing an easy way to do this yourself (see https://github.com/dart-lang/dart_docker)
Install the latest bleeding_edge build on the host system (using this script https://gist.github.com/zoechi/d240f56a32ed5649797f or a manual download from http://gsdview.appspot.com/dart-archive/channels/be/raw/latest/editor/darteditor-linux-x64.zip)
Add this to the app.yaml file
env_variables:
  DBG_ENABLE: 'true'

# disable health-checking because this is so annoying during debugging
vm_health_check:
  enable_health_check: False
See How to disable health checking for `gcloud preview app run` for more details about customizing health checking.
Launch the server code of your app with gcloud --verbosity debug app run app.yaml or gcloud --verbosity debug app run app.yaml index.yaml
Wait until the Docker container is ready (check with docker ps whether the Command column shows a value starting with /dart_runtime/dart_)
Open DartEditor
Open Menu Run > Remote Connection...
Connect to: Command-line VM
Host: localhost if you don't use boot2docker, or the IP address returned by the command boot2docker ip
Port: 5005
Select Folder... select the directory which contains the source code of your project.
Click OK
Set breakpoints and continue as usual.
Old
A first step is using the Observatory, which includes a browser-based debugger UI.
To make this work, add the following lines to the app.yaml file:
network:
  forwarded_ports: ["8181"]
This might be useful as well, to make server.dart wait until we have had a chance to set breakpoints using the Observatory:
env_variables:
  DART_VM_OPTIONS: '--pause-isolates-on-start'
boot2docker gives us the Docker IP (192.168.59.103), and after starting with gcloud preview app run app.yaml we can connect to http://192.168.59.103:8181, which should open the Observatory GUI.
I am running dev_appserver.py, so it auto-builds as I save Go files (I need to run this rather than goapp because of the log_level option I need).
When there is a successful build I would like the tests for the project (goapp test) to run automatically. How can I do this?
You could use something like https://github.com/nf/watch in a separate terminal window. It'll re-run tests in parallel to dev_appserver.py.
Install: go get github.com/nf/watch
Run from your app's directory: watch goapp test
With AppEngine, I use the entr app like so:
$ find ./*.go | entr goapp test
I would like to run a Solr server on Elastic Beanstalk, but I cannot find much about that on the web.
It must be possible somehow, because some people are already doing it (e.g. https://forums.aws.amazon.com/thread.jspa?threadID=91276).
Any ideas how I could do that?
Well, somehow I can upload the Solr WAR file into the environment, but then it gets complicated.
Where do I put the config files and the index directory so that each instance can reach them?
EDIT: Please keep in mind that this answer is from 2013. The products mentioned here have likely evolved. I have updated the documentation link to reflect changes in the solr clustering wiki. I encourage you to continue your research after reading this information.
ORIGINAL:
It only really makes sense to run Solr on Beanstalk instances if you are planning to only ever use a single-server deployment. The minute you want to scale your app, you will need to configure your Beanstalk environment to either create a Solr cluster or move to something like CloudSearch. If you are unfamiliar with EC2 lifecycles and Solr deployments, then CloudSearch will almost certainly save you time (read: money).
If you do want to run Solr on a single instance, you can use rake to launch it by adding a file named .ebextensions/solr.config to your local repo with the following contents:
container_commands:
  01create_post_dir:
    command: "mkdir -p /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
  02killjava:
    command: "killall java"
    test: "ps uax | grep java | grep root"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_start_solr.sh":
    mode: "755"
    owner: "root"
    group: "root"
    content: |
      #!/usr/bin/env bash
      . /opt/elasticbeanstalk/support/envvars
      cd $EB_CONFIG_APP_CURRENT
      su -c "RAILS_ENV=production bundle exec rake sunspot:solr:start" $EB_CONFIG_APP_USER
      su -c "RAILS_ENV=production bundle exec rake db:seed" $EB_CONFIG_APP_USER
      su -c "RAILS_ENV=production bundle exec rake sunspot:reindex" $EB_CONFIG_APP_USER
Please keep in mind that this will cause chaos if you are using autoscaling.