Wrong import after updating go_appengine - google-app-engine

I have updated go_appengine to the latest version and now I get the error message below. It looks like it expects Go 1.4.2 while I am running 1.5.2? It's unlikely that I was running 1.4 before.
Any pointers on the problem?
➜ dev git:(master) ✗ goapp serve
INFO 2016-01-25 02:31:53,808 devappserver2.py:769] Skipping SDK update check.
INFO 2016-01-25 02:31:53,832 api_server.py:205] Starting API server at: http://localhost:65025
INFO 2016-01-25 02:31:53,836 dispatcher.py:197] Starting module "default" running at: http://localhost:8080
INFO 2016-01-25 02:31:53,839 admin_server.py:116] Starting admin server at: http://localhost:8000
ERROR 2016-01-25 02:31:55,105 go_runtime.py:179] Failed to build Go application: (Executed command: /Users/moon/go_appengine/goroot/bin/go-app-builder -app_base /Users/moon/Downloads/d1/d2 -arch 6 -dynamic -goroot /Users/moon/go_appengine/goroot -nobuild_files ^^$ -unsafe -gopath /Users/moon/gopath:/Users/moon/go_appengine/goroot -binary_name _go_app -extra_imports appengine_internal/init -work_dir /var/folders/75/xlk18qb10231fqxqmdwg_xhm0000gn/T/tmpf1CeKbappengine-go-bin -gcflags -I,/Users/moon/go_appengine/goroot/pkg/darwin_amd64_appengine -ldflags -L,/Users/moon/go_appengine/goroot/pkg/darwin_amd64_appengine beta.go main.go)
/var/folders/75/xlk18qb10231fqxqmdwg_xhm0000gn/T/tmpf1CeKbappengine-go-bin/beta.go:9: import /Users/moon/go_appengine/goroot/pkg/darwin_amd64/appengine.a: object is [darwin amd64 go1.5.2 X:none] expected [darwin amd64 go1.4.2 (appengine-1.9.31) X:precisestack]
2016/01/25 04:31:55 go-app-builder: build timing: 1×6g (11ms total), 0×6l (0 total)
2016/01/25 04:31:55 go-app-builder: failed running 6g: exit status 1
^Cgoapp: caught SIGINT, waiting for dev_appserver.py to shut down
INFO 2016-01-25 02:32:19,423 shutdown.py:45] Shutting down.
INFO 2016-01-25 02:32:19,423 api_server.py:648] Applying all pending transactions and saving the datastore
INFO 2016-01-25 02:32:19,423 api_server.py:651] Saving search indexes
➜ dev git:(master) ✗ goapp version
go version go1.4.2 (appengine-1.9.31) darwin/amd64
➜ dev git:(master) ✗ go version
go version go1.5.2 darwin/amd64
➜ dev git:(master) ✗
➜ dev git:(master) ✗ echo $GOPATH
/Users/moon/gopath:/Users/moon/go_appengine/goroot
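For what it's worth, the error line says the archive at /Users/moon/go_appengine/goroot/pkg/darwin_amd64/appengine.a was built by go1.5.2, while goapp's bundled toolchain (per goapp version above) is still go1.4.2. Because the SDK's goroot is on $GOPATH, a plain go build or go install with the system Go 1.5.2 can write archives into it. A rough cleanup sketch using the paths from the log above; this is a guess at the cause, not an official procedure:

# Take the SDK goroot out of GOPATH so the system Go 1.5.2 stops writing
# archives into it (go-app-builder still finds the SDK packages via the
# -goroot flag shown in the failed command).
export GOPATH=/Users/moon/gopath

# Remove the stale archive named in the error, which was built by go1.5.2;
# the SDK's own prebuilt packages live under darwin_amd64_appengine.
rm -f /Users/moon/go_appengine/goroot/pkg/darwin_amd64/appengine.a

# Build and serve with the SDK toolchain (reports go1.4.2 above).
goapp serve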

Related

AWS Amplify build timeout error: stuck at the Frontend phase while running gulpfile.js

I moved from manual deployment to automatic CI/CD with my GitHub repo. With manual deployment it was working without any issues. After connecting the main repo and starting the build, it never completes the Frontend provision step.
I get the build timeout error with the default 30-minute limit, and even after raising it to 120 minutes via the environment-variable override it still runs out of time.
On my local machine it takes under 5 minutes to build without any errors. (Screenshot: error log of the Amplify build page.)
From the build log I can see that it gets stuck after running the commands in gulpfile.js (see the gulpfile sketch after the build settings below).
Build Settings file:
version: 1
env:
  variables:
    VERSION_AMPLIFY: 8.3.0
backend:
  phases:
    preBuild:
      commands:
        - npm i -g @aws-amplify/cli@${VERSION_AMPLIFY}
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn run build
        - node ./node_modules/gulp/bin/gulp.js
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
(Screenshot: console error status.)
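A common reason for a gulp step that finishes quickly on a laptop but hangs in CI is that the task being invoked starts a watcher and never exits; whether that applies to this gulpfile is only a guess. A sketch of the distinction, assuming gulp 4 and made-up task/glob names:

// gulpfile.js (sketch): task names and globs are hypothetical.
const gulp = require('gulp');

// A terminating task: processes the files once, returns the stream, and exits,
// so `node ./node_modules/gulp/bin/gulp.js ci-assets` can complete in Amplify.
gulp.task('ci-assets', function () {
  return gulp.src('src/assets/**/*').pipe(gulp.dest('build/assets'));
});

// A watcher never returns; if the build command (or the default task it runs)
// ends up here, the Frontend phase hangs until the timeout, no matter how high
// the limit is set.
gulp.task('watch', function () {
  return gulp.watch('src/assets/**/*', gulp.series('ci-assets'));
});

If that is what is happening, pointing the build command at the terminating task (or keeping the watcher out of the default task) should let the phase finish.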

Cloud Build fails to build App Engine Python 3.8 app (due to pip bug?)

I have a number of Python 3.7 apps on Google App Engine standard, all building and deploying fine. I'm trying to upgrade some of them to the new Python 3.8 runtime, but when I try to deploy, they fail in Cloud Build.
It looks like they're hitting this open pip bug (more background). Odd that only the Python 3.8 runtime triggers this bug, though, and 3.7 builds fine.
Full log below. (Note that it's happening in Cloud Build, not my local machine, so I can't upgrade pip or otherwise change any of the commands or environment.) Anyone know how I can fix or work around this?
File upload done.
Updating service [default]...failed.
ERROR: (gcloud.beta.app.deploy) Error Response: [9] Cloud build 83e346a0-7e88-43dd-b89c-a4820526e4a1 status: FAILURE
Error ID: f8df99ad
Error type: INTERNAL
Error message: ... (setup.py): started
Building wheel for webapp2 (setup.py): finished with status 'done'
Created wheel for webapp2: filename=webapp2-3.0.0b1-py3-none-any.whl size=68362 sha256=9dd9f3ab6a55404492a88eb9a6bacb00faa37efafbc41f21a24d21cfba0eaea3
Stored in directory: /layers/google.python.pip/pipcache/wheels/55/e9/4d/76b030f418cac0bef4a3dcc15ca95c9671f1e826731ce2bc0f
Building wheel for tlslite-ng (setup.py): started
Building wheel for tlslite-ng (setup.py): finished with status 'done'
Created wheel for tlslite-ng: filename=tlslite_ng-0.7.5-py3-none-any.whl size=199869 sha256=b9ead00f0832041fba1e9d3883e57847995c2d6f83ecb7ea87d09cf82c730e8b
Stored in directory: /layers/google.python.pip/pipcache/wheels/a6/e1/a6/09610854c3405202d0b71d8f869811781e40cd26ffb85eacf8
Successfully built gdata humanize mf2py mf2util python-tumblpy ujson webapp2 tlslite-ng
Installing collected packages: six, ecdsa, tlslite-ng, lxml, gdata, certifi, urllib3, chardet, idna, requests, setuptools, protobuf, googleapis-common-protos, pyasn1, pyasn1-modules, rsa, cachetools, google-auth, pytz, grpcio, google-api-core, google-cloud-core, google-cloud-logging, gunicorn, pbr, extras, linecache2, traceback2, python-mimeparse, argparse, unittest2, testtools, fixtures, mox3, soupsieve, beautifulsoup4, gdata-python3, redis, google-cloud-datastore, google-cloud-ndb, humanize, MarkupSafe, jinja2, webencodings, html5lib, mf2py, mf2util, oauthlib, prawcore, websocket-client, update-checker, praw, requests-oauthlib, python-tumblpy, tweepy, ujson, webob, webapp2, oauth-dropins
Running setup.py develop for oauth-dropins
ERROR: Command errored out with exit status 1:
command: /opt/python3.8/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/workspace/setup.py'"'"'; __file__='"'"'/workspace/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps --home /tmp/pip-target-zp53suvg
cwd: /workspace/
Complete output (6 lines):
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: option --home not recognized
----------------------------------------
ERROR: Command errored out with exit status 1: /opt/python3.8/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/workspace/setup.py'"'"'; __file__='"'"'/workspace/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps --home /tmp/pip-target-zp53suvg Check the logs for full command output.
WARNING: You are using pip version 20.1.1; however, version 20.2.2 is available.
You should consider upgrading via the '/opt/python3.8/bin/python3 -m pip install --upgrade pip' command.
Full build logs: https://console.cloud.google.com/cloud-build/builds/83e346a0-7e88-43dd-b89c-a4820526e4a1?project=216076569502
Here's my requirements.txt file. I suspect the -e . might be the problem...but it works with Python 3.7, so if so, that's disappointing.
git+https://github.com/dvska/gdata-python3.git#egg=gdata
google-cloud-logging~=1.14
gunicorn~=20.0
mox3~=0.28
# this includes everything in setup.py's install_requires.
# https://caremad.io/posts/2013/07/setup-vs-requirement/#developing-reusable-things-or-how-not-to-repeat-yourself
-e .
I checked the PyPI page for oauth-dropins (the package it's failing on), and it mentions exactly this issue being caused by -e.
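For what it's worth, the workaround usually suggested for that pip bug is to drop the editable install on the 3.8 runtime, since the failure above comes from pip invoking setup.py develop --home .... A sketch of that change; it is not a confirmed fix for this particular app:

# requirements.txt (sketch): install the local package normally instead of in
# editable mode, which is what triggers the `setup.py develop --home ...` call.
# was:
#   -e .
# becomes a plain local-directory requirement:
.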

What is the build error I'm running into while working through the GAE bookshelf tutorial?

For some broader context: I want to move a React app that I previously built and hosted on Firebase Hosting over to GAE, so that I can connect to both my Firebase and Cloud PostgreSQL databases (I could not find a way to connect to Cloud PostgreSQL from the app while it was hosted on Firebase).
Apologies in advance for the somewhat vague question, but I've run into several issues just working through the bookshelf app tutorial. I have been able to work through some of the errors, but I have not been able to figure out this one:
ERROR: (gcloud.app.deploy) Error Response: [9] Cloud build eaf3e253-ee83-46d4-a640-fd7e68083a13 status: FAILURE.
Build error details: {"error":{"errorType":"BuildError","canonicalCode":"INVALID_ARGUMENT","errorId":"2BCB87EC","errorMessage":"INFO FTL version node-v0.17.0\n
INFO Beginning FTL build for node\n
INFO FTL arg passed: exposed_ports None\n
INFO FTL arg passed: cache_repository us.gcr.io/piva-primero/app-engine-tmp/build-cache/ttl-7d\n
INFO FTL arg passed: tar_base_image_path None\n
INFO FTL arg passed: export_cache_stats False\n
INFO FTL arg passed: builder_output_path \"\"\n
INFO FTL arg passed: name us.gcr.io/piva-primero/app-engine-tmp/app/ttl-2h:55db4aec-8e29-42ea-bea3-c457808a429c\n
INFO FTL arg passed: ttl 168\n
INFO FTL arg passed: global_cache False\n
INFO FTL arg passed: cache True\n
INFO FTL arg passed: upload True\n
INFO FTL arg passed: sh_c_prefix False\n
INFO FTL arg passed: fail_on_error True\n
INFO FTL arg passed: base us.gcr.io/gae-runtimes/nodejs10:nodejs10_20191019_10_16_3_RC00\n
INFO FTL arg passed: output_path None\n
INFO FTL arg passed: cache_key_version v0.17.0\n
INFO FTL arg passed: cache_salt \n
INFO FTL arg passed: directory /workspace\n
INFO FTL arg passed: entrypoint None\n
INFO FTL arg passed: additional_directory /.gaeconfig\n
INFO FTL arg passed: destination_path /srv\n
INFO FTL arg passed: verbosity NOTSET\n
INFO starting: full build\n
INFO starting: builder initialization\n
INFO Loading Docker credentials for repository 'us.gcr.io/gae-runtimes/nodejs10:nodejs10_20191019_10_16_3_RC00'\n
INFO Loading Docker credentials for repository 'us.gcr.io/piva-primero/app-engine-tmp/app/ttl-2h:55db4aec-8e29-42ea-bea3-c457808a429c'\n
INFO builder initialization took 0 seconds\n
INFO starting: build process for FTL image\n
INFO starting: rm_node_modules\n
INFO rm_node_modules rm -rf /workspace/node_modules\n
INFO `rm_node_modules` stdout:\n\n
INFO rm_node_modules took 0 seconds\n
INFO using descriptor:package-lock.json\n
INFO using descriptor:package.json\n
INFO starting: checking_cached_packages_json_layer\nDEBUG Checking cache for cache_key ddd19bc8f86cc8fedfd69dfce5aac6d21a6e2024dec42d6d3a96af6fc7a78dbd\n
INFO No cached base image found for entry: us.gcr.io/piva-primero/app-engine-tmp/build-cache/ttl-7d/node-cache:ddd19bc8f86cc8fedfd69dfce5aac6d21a6e2024dec42d6d3a96af6fc7a78dbd.\n
INFO Cache miss on local cache for us.gcr.io/piva-primero/app-engine-tmp/build-cache/ttl-7d/node-cache:ddd19bc8f86cc8fedfd69dfce5aac6d21a6e2024dec42d6d3a96af6fc7a78dbd\n
INFO No cached dependency layer for ddd19bc8f86cc8fedfd69dfce5aac6d21a6e2024dec42d6d3a96af6fc7a78dbd\n
INFO [CACHE][MISS] v0.17.0:NODE-\u003eddd19bc8f86cc8fedfd69dfce5aac6d21a6e2024dec42d6d3a96af6fc7a78dbd\n
INFO checking_cached_packages_json_layer took 0 seconds\n
INFO starting: building_packages_json_layer\n
INFO starting: npm_install\n
INFO npm_install npm install --production\n
INFO `npm_install` stdout:\n\n\u
003e grpc#1.7.3 install /workspace/node_modules/#google-cloud/video-intelligence/node_modules/grpc\n\u
003e node-pre-gyp install --fallback-to-build --library=static_library\n\n
make: Entering directory '/workspace/node_modules/#google-cloud/video-intelligence/node_modules/grpc/build'\n
make: Entering directory '/workspace/node_modules/#google-cloud/video-intelligence/node_modules/grpc/build'\n
CC(target) Release/obj.target/grpc/deps/grpc/src/core/lib/surface/init.o\n
CC(target) Release/obj.target/grpc/deps/grpc/src/core/lib/surface/init.o\n
grpc.target.mk:405: recipe for target 'Release/obj.target/grpc/deps/grpc/src/core/lib/surface/init.o' failed\n
make: Leaving directory '/workspace/node_modules/#google-cloud/video-intelligence/node_modules/grpc/build'\n
CC(target) Release/obj.target/grpc/deps/grpc/src/core/lib/channel/channel_args.o\n
Failed to execute '/usr/bin/node /usr/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --library=static_library --module=/workspace/node_modules/#google-cloud/video-intelligence/node_modules/grpc/src/node/extension_binary/node-v64-linux-x64-glibc/grpc_node.node --modul.
I'm trying to run the tutorial code on my machine. I've been able to successfully deploy a different app in another project but have had some real trouble with this one. Any tips for resolving this error would be very helpful.
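What is actually failing above is the old grpc@1.7.3 native module being compiled from source against the Node 10 runtime (the node-v64 in the extension path). A diagnostic sketch, assuming the dependency is pulled in transitively by @google-cloud/video-intelligence as the log suggests:

# Find out which dependency pins the old grpc that gets compiled from source.
npm ls grpc

# Updating the parent package usually brings in a grpc (or grpc-js) release
# with prebuilt binaries for Node 10, so node-gyp never has to run.
npm install @google-cloud/video-intelligence@latest

# Re-run what the FTL builder runs, to confirm the install now succeeds.
npm install --production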

Go Appengine Managed VM issue: unknown flag -trimpath

I am attempting to get a Go app up on App Engine using Managed VMs. As far as I can tell, Docker is running fine locally and all the dependencies are satisfied, but when I try to serve the app locally I run into the following error:
INFO 2015-03-05 22:21:14,917 containers.py:280] /goroot/pkg/tool/linux_amd64/6g: unknown flag -trimpath
INFO 2015-03-05 22:21:14,922 containers.py:280] 2015/03/05 22:21:14 go-app-builder: build timing: 1×6g (5ms total), 0×gopack (0 total), 0×6l (0 total)
INFO 2015-03-05 22:21:14,923 containers.py:280] 2015/03/05 22:21:14 go-app-builder: failed running 6g: exit status 1
Running macOS and boot2docker. I have always been able to run App Engine locally without issue, so I assume this has something to do with Docker, the Go version inside it, or something else goofy. I'd be super grateful to anyone who can point me in the right direction; there doesn't seem to be much out there on this one.
Full trace is below:
➜ appengine-try-go gcloud preview app run ./app.yaml
Module [default] found in file [/Users/markhayden/google-cloud-sdk/platform/google_appengine/goroot/src/appengine-try-go/app.yaml]
INFO: Looking for the Dockerfile in /Users/markhayden/google-cloud-sdk/platform/google_appengine/goroot/src/appengine-try-go
INFO: Using Dockerfile found in /Users/markhayden/google-cloud-sdk/platform/google_appengine/goroot/src/appengine-try-go
INFO 2015-03-05 22:21:13,424 devappserver2.py:726] Skipping SDK update check.
INFO 2015-03-05 22:21:13,485 api_server.py:172] Starting API server at: http://localhost:63533
INFO 2015-03-05 22:21:13,521 vm_runtime_proxy_go.py:107] Starting Go VM Deployment process
INFO 2015-03-05 22:21:13,521 dispatcher.py:186] Starting module "default" running at: http://localhost:8080
INFO 2015-03-05 22:21:13,524 admin_server.py:118] Starting admin server at: http://localhost:8000
INFO 2015-03-05 22:21:13,545 containers.py:259] Building docker image whiskey-tango-foxtrot.default.1 from /var/folders/lv/9hzw2s5d25v17j4wph5pl46c0000gn/T/tmpFreenWgo_deployment_dir/Dockerfile:
INFO 2015-03-05 22:21:13,545 containers.py:261] -------------------- DOCKER BUILD --------------------
INFO 2015-03-05 22:21:14,312 containers.py:280] ---> 3b6b889b2273
INFO 2015-03-05 22:21:14,312 containers.py:280] Step 1 : ADD . /app
INFO 2015-03-05 22:21:14,561 containers.py:280] ---> b994e021ab2e
INFO 2015-03-05 22:21:14,570 containers.py:280] Removing intermediate container 332c78e9be28
INFO 2015-03-05 22:21:14,571 containers.py:280] Step 2 : RUN /bin/bash /app/_ah/build.sh
INFO 2015-03-05 22:21:14,676 containers.py:280] ---> Running in 7e4157c8b5b2
INFO 2015-03-05 22:21:14,905 containers.py:280] b02fde1ce30965d84e52f461de37655580e16956 /app/_ah/gab
INFO 2015-03-05 22:21:14,917 containers.py:280] /goroot/pkg/tool/linux_amd64/6g: unknown flag -trimpath
INFO 2015-03-05 22:21:14,922 containers.py:280] 2015/03/05 22:21:14 go-app-builder: build timing: 1×6g (5ms total), 0×gopack (0 total), 0×6l (0 total)
INFO 2015-03-05 22:21:14,923 containers.py:280] 2015/03/05 22:21:14 go-app-builder: failed running 6g: exit status 1
ERROR 2015-03-05 22:21:15,097 containers.py:283] The command [/bin/sh -c /bin/bash /app/_ah/build.sh] returned a non-zero code: 1
INFO 2015-03-05 22:21:15,097 containers.py:292] --------------------------------------------------------
INFO 2015-03-05 22:21:15,098 vm_runtime_proxy_go.py:133] Go VM Deployment process failed: Docker build aborted: The command [/bin/sh -c /bin/bash /app/_ah/build.sh] returned a non-zero code: 1
ERROR 2015-03-05 22:21:15,098 instance.py:280] Docker build aborted: The command [/bin/sh -c /bin/bash /app/_ah/build.sh] returned a non-zero code: 1
INFO 2015-03-05 22:21:15,098 health_check_service.py:101] Health checks starting for instance 0.
For anyone looking, more verbose output can be found here: https://gist.github.com/markhayden/1090aa3c232f56788a1f
Update
Now I'm also getting 2015/03/12 07:34:09 Can't find package "appengine" in $GOPATH: cannot find package "appengine" in any of: when trying to fire things up. It looks like the comment below may have solved the -trimpath issue, but now this one is stopping me from confirming that. Can anyone help me confirm how I should be setting my GOPATH / GOROOT to resolve this? Also, it's unclear whether it is trying to locate the missing packages in the Docker container or locally on my machine (a diagnostic sketch follows the log below).
INFO 2015-03-12 07:34:11,311 containers.py:280] b02fde1ce30965d84e52f461de37655580e16956 /app/_ah/gab
INFO 2015-03-12 07:34:11,359 containers.py:280] 2015/03/12 07:34:09 Can't find package "appengine" in $GOPATH: cannot find package "appengine" in any of:
INFO 2015-03-12 07:34:11,360 containers.py:280] /goroot/src/appengine (from $GOROOT)
INFO 2015-03-12 07:34:11,364 containers.py:280] /gopath/src/appengine (from $GOPATH)
INFO 2015-03-12 07:34:11,720 containers.py:280] /tmp/work/main.go:4: can't find import: "appengine"
INFO 2015-03-12 07:34:11,721 containers.py:280] 2015/03/12 07:34:09 go-app-builder: build timing: 3×6g (355ms total), 0×gopack (0 total), 0×6l (0 total)
INFO 2015-03-12 07:34:11,722 containers.py:280] 2015/03/12 07:34:09 go-app-builder: failed running 6g: exit status 1
ERROR 2015-03-12 07:34:11,937 containers.py:283] The command [/bin/sh -c /bin/bash /app/_ah/build.sh] returned a non-zero code: 1
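On the container-vs-local question: the paths in the log (/goroot/src/appengine and /gopath/src/appengine) are the GOROOT and GOPATH inside the build container, so the lookup is happening in the Docker image, not on the Mac. A rough way to confirm that, using a placeholder image id (take one of the intermediate ids printed in the DOCKER BUILD output):

# Poke around the image produced by the step before the failure; the id below
# is a placeholder, not a real value from this build.
docker run --rm <intermediate-image-id> /bin/bash -c '
  ls /goroot/src/appengine 2>/dev/null || echo "no appengine sources under /goroot/src"
  ls /gopath/src/appengine 2>/dev/null || echo "no appengine sources under /gopath/src"
'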
I posted about this on the google-appengine-go list; the solution is to add the following line to your Dockerfile, at least until the base image is updated.
RUN rm -rf /goroot && mkdir /goroot && curl https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz | tar xvzf - -C /goroot --strip-components=1

Why does Capistrano 3 not filter when using "on roles(:web)"?

I'm not sure if this is just a misunderstanding of Capistrano and Rake on my part, but I'm going through the Capistrano 3 setup pages (www.capistranorb.com), and a few of the steps describe how to set up the servers and how to run the basic :check_write_permissions task.
My problem is that when I run the example code against my servers with the following server setup, I get unexpected results.
I have my config/deploy/production.rb file set up as such:
server '10.1.28.90', roles: [:web, :app]
server '10.1.246.239', roles: [:db]
I then created the :check_write_permissions task in lib/capistrano/tasks/access_check.rake. I made one small modification to the "on roles(:all)" so it would instead be "on roles(:web)".
desc "Check that we can access everything"
task :check_write_permissions do
on roles(:web) do |host|
if test("[ -w #{fetch(:deploy_to)} ]")
info "#{fetch(:deploy_to)} is writable on #{host}"
else
error "#{fetch(:deploy_to)} is not writable on #{host}"
end
end
end
When I run the task:
cap production check_write_permissions
OR
bundle exec cap production check_write_permissions
... I am expecting that it will only run the :check_write_permissions code against the servers with a :web role. Instead, my output shows that the :db server is also targeted, which throws exceptions because I do not have a deploy directory on the database server.
DEBUG[90f77252] Running /usr/local/rvm/bin/rvm version on 10.1.246.239
DEBUG[90f77252] Command: /usr/local/rvm/bin/rvm version
DEBUG[fa4e93ec] Running /usr/local/rvm/bin/rvm version on 10.1.28.90
DEBUG[fa4e93ec] Command: /usr/local/rvm/bin/rvm version
DEBUG[90f77252] bash: /usr/local/rvm/bin/rvm: No such file or directory
DEBUG[fa4e93ec] rvm 1.25.28 (stable) by Wayne E. Seguin <wayneeseguin@gmail.com>, Michal Papis <mpapis@gmail.com> [https://rvm.io/]
DEBUG[fa4e93ec] Finished in 1.060 seconds with exit status 0 (successful).
rvm 1.25.28 (stable) by Wayne E. Seguin <wayneeseguin@gmail.com>, Michal Papis <mpapis@gmail.com> [https://rvm.io/]
DEBUG[3583646b] Running /usr/local/rvm/bin/rvm current on 10.1.28.90
DEBUG[3583646b] Command: /usr/local/rvm/bin/rvm current
DEBUG[3583646b] ruby-2.0.0-p481
DEBUG[3583646b] Finished in 0.286 seconds with exit status 0 (successful).
ruby-2.0.0-p481
DEBUG[b91aa735] Running /usr/local/rvm/bin/rvm 2.0.0-p481 do ruby --version on 10.1.28.90
DEBUG[b91aa735] Command: /usr/local/rvm/bin/rvm 2.0.0-p481 do ruby --version
DEBUG[b91aa735] ruby 2.0.0p481 (2014-05-08 revision 45883) [x86_64-linux]
DEBUG[b91aa735] Finished in 0.400 seconds with exit status 0 (successful).
ruby 2.0.0p481 (2014-05-08 revision 45883) [x86_64-linux]
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing on host 10.1.246.239: rvm exit status: 127
rvm stdout: Nothing written
rvm stderr: bash: /usr/local/rvm/bin/rvm: No such file or directory
When I run the task with a ROLE filter in the command I execute, such as:
ROLES=web cap production check_write_permissions
This works as expected: only the web server gets the task run against it.
DEBUG[7974b8ee] Running /usr/local/rvm/bin/rvm version on 10.1.28.90
DEBUG[7974b8ee] Command: /usr/local/rvm/bin/rvm version
DEBUG[7974b8ee] rvm 1.25.28 (stable) by Wayne E. Seguin <wayneeseguin@gmail.com>, Michal Papis <mpapis@gmail.com> [https://rvm.io/]
DEBUG[7974b8ee] Finished in 1.062 seconds with exit status 0 (successful).
rvm 1.25.28 (stable) by Wayne E. Seguin <wayneeseguin@gmail.com>, Michal Papis <mpapis@gmail.com> [https://rvm.io/]
DEBUG[23f666d8] Running /usr/local/rvm/bin/rvm current on 10.1.28.90
DEBUG[23f666d8] Command: /usr/local/rvm/bin/rvm current
DEBUG[23f666d8] ruby-2.0.0-p481
DEBUG[23f666d8] Finished in 0.297 seconds with exit status 0 (successful).
ruby-2.0.0-p481
DEBUG[7ae64240] Running /usr/local/rvm/bin/rvm 2.0.0-p481 do ruby --version on 10.1.28.90
DEBUG[7ae64240] Command: /usr/local/rvm/bin/rvm 2.0.0-p481 do ruby --version
DEBUG[7ae64240] ruby 2.0.0p481 (2014-05-08 revision 45883) [x86_64-linux]
DEBUG[7ae64240] Finished in 0.387 seconds with exit status 0 (successful).
ruby 2.0.0p481 (2014-05-08 revision 45883) [x86_64-linux]
DEBUG[c0ebccc0] Running /usr/bin/env [ -w /data/union_benefits/ ] on 10.1.28.90
DEBUG[c0ebccc0] Command: [ -w /data/union_benefits/ ]
DEBUG[c0ebccc0] Finished in 0.126 seconds with exit status 1 (failed).
ERROR/data/union_benefits/ is not writable on 10.1.28.90
What is the reason for this? I have dug through the Capistrano 3.2.1 code a bit, but I can't figure it out. Maybe it is just a misunderstanding on my part of how roles(...) works.
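One observation that may explain it: the commands that blow up on 10.1.246.239 are /usr/local/rvm/bin/rvm version checks, and those do not come from the :check_write_permissions body at all; they look like the capistrano-rvm gem's own hook, which by default runs against every server before any task. If that gem is in the Gemfile, a sketch of narrowing its scope (the :rvm_roles setting is the gem's; the role list mirrors this production.rb):

# config/deploy.rb (sketch, assuming capistrano-rvm is what runs `rvm version`):
# keep its checks off the :db box, which has no RVM installed.
set :rvm_roles, [:web, :app]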
