How to authenticate a private Go Module using go 1.11 and Google App Engine Standard - google-app-engine

I've been updating my entire Go GAE Standard project to use Go 1.11's modules.
Main directory structure
app.yaml
app.go
go.mod
go.sum
app.go
package main
import "bitbucket.org/myPrivateRepo"
func main() {
myImportantModule.Run()
}
go.mod
module myProject
require bitbucket.org/myPrivateRepo v0.0.1
The Error
If I try to gcloud app deploy:
ERROR: (gcloud.app.deploy) Error Response: [9] Cloud build <GUI>
status: FAILURE.
Build error details: go: bitbucket.org/myPrivateRepo#v0.0.1:
https://api.bitbucket.org/2.0/repositories/myPrivateRepo?fields=scm:
403 Forbidden
(Note: obviously the repo I'm using has a real name).
So can I do it this way? I'll admit to not fully understanding the migration documentation, particularly when it talked about "Moving files to your GOPATH". https://cloud.google.com/appengine/docs/standard/go111/go-differences
I mean, I thought one of the benefits of the new module system is that you don't need everything under the go path. When I read https://github.com/golang/go/wiki/Modules for example, it very early on says "Create a directory outside of your GOPATH:"
So, to be clear, right now all of my code is outside the go path, but everything builds locally just fine.
I think it all works because go automatically downloads and caches modules under GOPATH when I run go mod tidy, go build, etc.
Yet it fails when I try to gcloud app deploy. How would the google cloud build system ever have access to my private repositories anyway? I'm obviously missing something important. I also read you are not supposed to combine vendoring with the new module system so that can't be it.
I will be very happy if this works, as using DEP forced me to use goapp deploy very awkwardly.
Thanks!

UPDATE: Google has some better documentation now that go 1.14 is out: https://cloud.google.com/appengine/docs/standard/go/specifying-dependencies
My solution:
Instead of dealing with credentials, I'm using Go's module replace functionality to point GAE at my local code. This is working well.
Directory structure:
myService/
src/
service.go // has a Run() function to set up routers etc.
go.mod // depends on my private module in bitbucket and other things
… // other source files
build/
gae/
src/ // symlink to ../../src
modules/ // git ignored, I clone or copy my modules in build scripts.
app.go // has main(), which calls service.Run() and appengine.Main()
go.mod // see below…
app.yaml
Method
I use Go's module replace directive so that GAE uses my local code. Before building, I parse myService/src/go.mod to find the correct version of my private module, then I clone it into the modules folder. I also made an option to copy work-in-progress module source code for debugging locally without committing to my module repositories.
go.mod from gae directory:
module myServiceGAE
require (
bitbucket.org/me/myService v0.0.0
google.golang.org/appengine v1.4.0
)
replace bitbucket.org/me/myService => ./src
replace bitbucket.org/me/myModule => ./modules/utils
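For reference, the version lookup described above can be done with the golang.org/x/mod/modfile package rather than ad-hoc text parsing. This is only a rough sketch under my example module names (bitbucket.org/me/myModule), not the exact build script:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/mod/modfile"
)

// findRequiredVersion reads a go.mod file and returns the version it
// requires for the given module path.
func findRequiredVersion(goModPath, modulePath string) (string, error) {
	data, err := os.ReadFile(goModPath) // needs a reasonably recent Go toolchain
	if err != nil {
		return "", err
	}
	f, err := modfile.Parse(goModPath, data, nil)
	if err != nil {
		return "", err
	}
	for _, r := range f.Require {
		if r.Mod.Path == modulePath {
			return r.Mod.Version, nil
		}
	}
	return "", fmt.Errorf("%s is not required by %s", modulePath, goModPath)
}

func main() {
	v, err := findRequiredVersion("myService/src/go.mod", "bitbucket.org/me/myModule")
	if err != nil {
		log.Fatal(err)
	}
	// The build script can then clone that tag into ./modules.
	fmt.Println(v)
}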
Pros
The package under myService has no references to or knowledge of GAE, so I can easily build it into a Docker image, etc. I think parsing the service's go.mod files myself would amount to writing my own dependency manager, defeating the benefits of Go modules.
Cons
If I had a private module which depended on another private module, I think things would get too complicated.

Another alternative is to use Google Cloud Secret Manager with Cloud Build:
https://cloud.google.com/cloud-build/docs/access-private-github-repos
Cloud Build uses an SSH key stored there to access and pull your private repository.

Set git credentials before deploying:
git config credential.helper '!f() { sleep 1; printf "username=%s\npassword=%s\n" "${GIT_USER}" "${GIT_PASSWORD}"; }; f'
export GIT_USER=put_git_user_here
export GIT_PASSWORD=put_git_password_here
gcloud app deploy

Related

Vespa Tutorial – HTTP API use-case fails to activate with IllegalArgumentException

I'm currently following the Vespa tutorials, and ran into an issue with the HTTP API use-case. Everything works fine, from mvn install package through vespa-deploy prepare target/application.zip.
The call to vespa-deploy activate returns normally, but the application then never gets available on localhost:8080. Looking at /opt/vespa/logs/vespa/vespa.log (in the VM) one finds the following stack trace:
Container.com.yahoo.jdisc.core.StandaloneMain error Unexpected:
exception=
java.lang.IllegalArgumentException: Could not create a component with id 'com.mydomain.demo.DemoComponent'.
Tried to load class directly, since no bundle was found for spec: sample-app-http-api-searcher.
If a bundle with the same name is installed, there is a either a version mismatch or the installed bundle's version contains a qualifier string.
at com.yahoo.osgi.OsgiImpl.resolveFromClassPath(OsgiImpl.java:48)
...
This occurred using a fresh Docker image with a clean clone of the sample-apps git repository. Preparing and activating the basic sample as well as the other http example did work seamlessly.
I checked the sources and the xml files for obvious problems but don't have any clue about what is failing and where.
target/application.zip contains
application/components/http-api-using-searcher-1.0.1-deploy.jar
application/hosts.xml
application/searchdefinitions/basic.sd
application/services.xml
And the jar itself does contain a com/mydomain/demo/DemoComponent.class file (among other things).
Potentially related issue on the github tracker: https://github.com/vespa-engine/vespa/issues/3479 I'll be posting a link to this question there as well, but I still think it's worth a SO question, at least to get some action behind the vespa tag :)
The bundle id in the application's services.xml file was wrong. Please pull the application from git and try again now. See also PR: https://github.com/vespa-engine/sample-apps/pull/18
Brief explanation: The bundle id given in the bundle="<id>" declaration in services.xml must match the 'Bundle-SymbolicName' in the bundle's manifest. When the bundle has been built with the Vespa bundle-plugin, the symbolic name is by default the same as the project's artifactId. Hence, in most cases you just have to verify that the bundle id matches the artifactId.

Google Go AppEngine imports and conflicts when serving / testing

So I have spent the better part of two days trying to figure this one out and no matter what I do I can't get things straightened out. Here is what is going on:
Using Go and App Engine, I am running into issues when trying to get proper unit tests working.
I have tried lots of structures but here is a sample of where I am now: https://github.com/markhayden/SampleIssue
I am running into dependency issues in either goapp serve or goapp test -v ./src/lib1 depending on how I have my import paths set.
If I use "src/lib1" for my import path and then goapp serve. My app boots and runs fine, but when I run tests I get the following failure:
src/lib1/lib1.go:5:2: cannot find package "src/lib2" in any of:
/Users/USERNAME/go_appengine/goroot/src/pkg/src/lib2 (from $GOROOT)
/Users/markhayden/Projects/go/src/src/lib2 (from $GOPATH)
Likewise, if I use "dummy/src/lib1" as my path, my tests are happy and run fine, but upon running goapp serve I now get:
2014/11/06 20:33:34 go-app-builder: Failed parsing input: app file lib1.go conflicts with same file imported from GOPATH
I have fiddled with all sorts of different options and can't figure out how to handle dependencies and still have solid testing. Maybe it's an App Engine / Go bug? Or am I missing something?
Any help would be very much appreciated. Thanks in advance!
Updated everything based on the first comment's feedback. I can run tests (as I was able to do before) but I still cannot serve the app. Here is what I get when running goapp serve:
INFO 2014-11-07 17:24:48,727 devappserver2.py:745] Skipping SDK update check.
INFO 2014-11-07 17:24:48,748 api_server.py:172] Starting API server at: http://localhost:60732
INFO 2014-11-07 17:24:48,751 dispatcher.py:185] Starting module "default" running at: http://localhost:8080
INFO 2014-11-07 17:24:48,754 admin_server.py:118] Starting admin server at: http://localhost:8000
ERROR 2014-11-07 17:24:49,041 go_runtime.py:171] Failed to build Go application: (Executed command: /Users/markhayden/go_appengine/goroot/bin/go-app-builder -app_base /Users/markhayden/Projects/go/src/github.com/markhayden/SampleIssue -arch 6 -dynamic -goroot /Users/markhayden/go_appengine/goroot -nobuild_files ^^$ -unsafe -gopath /Users/markhayden/Projects/go -print_extras_hash lib1/lib1.go lib2/lib2_test.go main_test.go main.go lib1/lib1_test.go lib2/lib2.go)
2014/11/07 09:24:49 go-app-builder: Failed parsing input: app file lib2.go conflicts with same file imported from GOPATH
$GOPATH = /Users/markhayden/Projects/go
$GOROOT = not set (according to the docs it doesn't need to be if you don't use a custom directory)
App Structure:
$GOPATH/src/github.com/markhayden/SampleIssue/
- app.yaml
- /lib1
- lib1_test.go
- lib1.go
- /lib2
- lib2_test.go
- lib2.go
- main_test.go
- main.go
In main.go:
import (
"fmt"
"github.com/markhayden/SampleIssue/lib1"
"net/http"
)
In lib1/lib1.go:
import (
"fmt"
"github.com/markhayden/SampleIssue/lib2"
)
Appengine "conflicts with same file imported from GOPATH" issue:
App Engine imports everything underneath the root directory (i.e. where the app.yaml is). This causes two imports: one by App Engine when it scans the directories, and a second by your source when it imports the package explicitly.
You have two choices:
Don't use the full import path (for sub-folder packages) with appengine.
Remove the source repository part of the import, so instead of "github.com/blah/blah" it would be "blah/blah".
Note: This kinda sucks, as it makes your build and software App Engine specific. You could make it a little better, maybe, by using build constraints, e.g. +build appengine or +build !appengine, to include/exclude certain files from the build depending on whether you are targeting App Engine (see the build-tag sketch after this list).
Move your modules/dependencies (sub-folders) to a separate and independent project to make it work with the full-path import convention:
Get rid of all directories / dependencies in the main project (where your app.yaml is), so that App Engine can't scan and find them.
Move them to another independent project that has no app.yaml and is not a sub-directory (I did SampleIssueDeps, e.g. /MarkHayden/SampleIssueDeps).
Then pull those dependencies in via full-path imports, e.g. github.com/MarkHayden/SampleIssueDeps/lib1.
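To illustrate the build-constraint idea from option 1, here is a hedged sketch (lib1.Run and the file names are made up for illustration): keep two small files that define the same helper, one compiled only on App Engine with the short import path and one compiled everywhere else with the full path, and call the helper from main.go unconditionally.

// File lib1_appengine.go, compiled only when targeting App Engine.
// +build appengine

package main

import "lib1" // short, repository-less path relative to the app root

func runLib1() { lib1.Run() }

// File lib1_default.go, compiled for every other target.
// +build !appengine

package main

import "github.com/markhayden/SampleIssue/lib1" // normal full import path

func runLib1() { lib1.Run() }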
Summary: For sub-folder packages in an App Engine project, don't include the "source repository" part of the import path, OR only use App Engine for init() and move all of your other code to separate projects used like external dependencies.
I came up with another option that isn't discussed here and in my opinion is much easier to deal with (and keeps your app less App Engine specific). Let's say you have the repo at github.com/blah/blah and right now the root folder of the repo defines your App Engine server.
First, move the app.yaml and other app engine specific files (NOT .go files) into github.com/blah/blah/appengine/app.yaml.
Next, wherever you run your init function for App Engine, rename it to something like func Run() { ... }, and then in github.com/blah/blah/appengine/whatever.go write something like this:
package appengine
import "github.com/blah/blah"
func init() {
blah.Run()
}
From my experience this has resolved the issue and made things much easier. I'll update this if I run into any major issues that make this a bad solution.
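For completeness, here is a rough sketch of what the renamed entry point in the root package might look like; the file name and handler are invented for illustration, and the real body is whatever used to live in your init function:

// File github.com/blah/blah/blah.go
package blah

import (
	"fmt"
	"net/http"
)

// Run registers the app's handlers. It used to be func init(), so importing
// the root package no longer registers anything as a side effect; the small
// appengine wrapper package above calls it explicitly.
func Run() {
	http.HandleFunc("/", hello)
}

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "hello from blah")
}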
I had a lot of trouble following various answers and understanding how to solve the problem.
But after a lot of research, I believe I understand both cause and solution:
Google's app-builder tooling does some path munging, which is causing this.
They are aware of the bug, but there is no ETA for a fix.
Problem summary:
any .go files inside or below the directory holding main.go / app.yaml will be double imported…
In summary, just make sure that ALL of your files/packages are siblings, not descendants, of the directory holding those two files...

gruntjs / angularjs - optional development config?

Like most js web apps we have a config.js file that contains global config information about the app, base api urls and such. These values are often different in local development than in production.
I've looked at answers like: Development mode for AngularJS using GruntJS, and also things like grunt-replace for creating an on-the-fly config file.
My issue is that the "development" part varies from developer to developer: we each need our own version of the API setup, so the base API URLs will differ. I'd like to allow each developer to override specific variables in the config in a way that doesn't require them to commit that info to the git repo. (I agree this isn't best practice and everything should be in the repo, but as it's only one or two variables for this project I can overlook it.)
Any ideas on how to achieve this setup?
You can use grunt-preprocess. I would have production (and dev-server, etc) values in a file, say env.json. You could use grunt to look for an optional file, say overrides.json or developer.json, which would extend/overwrite the values from env.json.
var envFile = require('./env.json');
You can create command line options to grunt with grunt.option, e.g. var env = grunt.option('env') || undefined;, which could be used to turn off overriding.
You can get data from the optional file using fs.existsSync:
var fs = require('fs');
var developerFile;
if (fs.existsSync('./developer.json')) {
developerFile = require('./developer.json');
}
The simplest way to define the grunt-preprocess context would be to use the developer.json file if present, or the env.json file if not:
context: developerFile ? developerFile : envFile;
This requires the developer file to be complete. An alternative is to extend the envFile with options from developerFile if it's present.
In my project, we use different config files (which are basically files exporting a JS object). Every developer has their own app/configs/developer/config.js file, which is not committed to source control, so every developer has their own setup. The project links to app/scripts/config.js by default, and this file is just a symlink to the developer's config file. However, there are also app/configs/staging/config.js and app/configs/production/config.js files, which are used when building the project with grunt: those configs are simply copied into the build output in place of the symlinked file.
I hope that makes sense..

Sencha Cmd v3 build error when implementing Bryntum Scheduler

Using Cmd 3.0.0.141, I have successfully generated a workspace and an Ext app in that workspace. The application builds correctly until I attempt to integrate the Bryntum Scheduler, where I encounter an error when I try to build:
"Failed to resolve dependency Sch.panel.SchedulerTree for file ExtCalendar.view.Tree"
The app is very simple at this point: it uses Ext.application and follows the MVC pattern, with a view defined, "ExtCalendar.view.Tree", that extends "Sch.panel.SchedulerTree". I also have models and stores that extend Bryntum classes, so I assume the compiler will trip over those as well, since it can't see the Sch namespace.
I've added a 'js' path to my app.json that points to the Bryntum js file where 'Sch.panel.SchedulerTree' comes from. I've tried to run the 'refresh' command with the same results (Failed to resolve...). I've regenerated the bootstrap.js file manually using 'compile', but nothing from the Sch namespace ever gets added to it, despite the Bryntum lib file being in the classpath.
What do I need to do in order to successfully run the 'build' command with libs like this?
Or, do I need to take a more granular approach using the 'compile' command?
With the help of the nice folks on the Sencha forums, I was able to resolve my build issues. The solution, for me, involved a shim. I added an external shim.js file to my index with as many //#require and //#define directives as needed in order to resolve the dependency issues.
According to the nice folks at Bryntum, once I upgrade from the free-trial version of the Bryntum Scheduler, I will be able to get rid of the shim and simply rely on the sencha.cfg classpath pointing at the Bryntum src.
Also, as an aside, the app.json file is not used in ExtJS apps; its inclusion in the generated files was a bug in build 141 of Cmd v3.
See this thread for more detail.

jTwitter, oAuth, and Google App Engine. NoClassDefFoundError

I'm trying to use jTwitter to get an OAuth instance to Twitter with my consumer key/secret and access token/secret. This is well documented in the javadoc here. I have downloaded signpost, signpost-jetty, and the jtwitter library, but after deploying and running the servlet, I get an error: java.lang.NoClassDefFoundError: winterwell/jtwitter/OAuthSignpostClient. Eclipse isn't complaining about the class not being there, because it is there: I can see it in the JAR file itself, which is in my project. So I said forget it, I'll try out OAuthScribeClient instead, but this generated a very similar error: java.lang.NoClassDefFoundError: org/scribe/oauth/Token. This one confuses me even further because I have the following code in my java file, and it compiles without error or warning:
import org.scribe.oauth.Token;
Token token = new Token("myaccesstokeninfo", "accesstokensecret");
Clearly, I'm missing something very fundamental, but I am at an absolute loss as to what it may be. Thanks.
Usually "NoClassDefFoundError" happens when you forget to copy all jar-files to your "/war/WEB-INF/lib" directory, so those libs will be unavailable from server-side.
Xo4yHaMope is probably right.
If you're working from Eclipse but running using a web container, then your runtime classpath might be different from your project classpath - which can cause this error.
To complete Ben Winter's answer: what I actually did, and what worked, was adding the jar to the libs folder within the project (see also here about the folder hierarchy).
When you do this, Eclipse will normally add the jar to the Android dependencies before launching the application. What I realised is that adding a jar to the build path makes its classes available only during the build.
