Parse multipart form on Google App Engine - google-app-engine

A project I'm working on depends on having a service hosted on Google App Engine parse multipart form data POSTed by SendGrid. The following code is an example of what we're doing:
package sendgrid_failure

import (
	"fmt"
	"net/http"

	"google.golang.org/appengine"
	"google.golang.org/appengine/log"
)

func init() {
	http.HandleFunc("/sendgrid/parse", sendGridHandler)
}

func sendGridHandler(w http.ResponseWriter, r *http.Request) {
	ctx := appengine.NewContext(r)
	err := r.ParseMultipartForm(-1)
	if err != nil {
		log.Errorf(ctx, "Unable to parse form: %v", err)
	}
	fmt.Fprint(w, "Test.")
}
When SendGrid POSTs its multipart form, the console shows an error similar to the following:
2018/01/04 23:44:08 ERROR: Unable to parse form: open /tmp/multipart-445139883: no file writes permitted on App Engine
App Engine doesn't allow writing to the local filesystem, but Go's ParseMultipartForm appears to need it. Is there an App Engine-specific library for parsing multipart forms, or should we be using a different method from the standard net/http library entirely? We're using the standard Go runtime.

The documentation for ParseMultipartForm says:
The whole request body is parsed and up to a total of maxMemory bytes of its file parts are stored in memory, with the remainder stored on disk in temporary files.
The server attempts to write all files to disk because the application passed -1 as maxMemory. Use a value larger than the size of the files you expect to upload.
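A minimal sketch of the fix, reusing the imports from the question and assuming the SendGrid payloads fit comfortably in memory (the 32 MB limit is an arbitrary choice, not something SendGrid or the API mandates):

func sendGridHandler(w http.ResponseWriter, r *http.Request) {
	ctx := appengine.NewContext(r)
	// Keep file parts in memory; anything above this limit would spill to
	// temporary files on disk, which App Engine does not permit.
	if err := r.ParseMultipartForm(32 << 20); err != nil {
		log.Errorf(ctx, "Unable to parse form: %v", err)
		http.Error(w, "could not parse form", http.StatusBadRequest)
		return
	}
	// The parsed parts are now available via r.MultipartForm and r.FormValue.
	for name, headers := range r.MultipartForm.File {
		for _, h := range headers {
			log.Infof(ctx, "file part %q: %s", name, h.Filename)
		}
	}
	fmt.Fprint(w, "Test.")
}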

Related

Stackdriver Trace with Google App Engine Go 1.11 runtime

I would like to get Stackdriver Trace to work on the new Go 1.11 standard runtime on Google App Engine. This is all still labeled as beta, so maybe it just doesn't quite work yet. However, I followed the step-by-step directions and it isn't working. I deployed the code (almost) exactly as listed in the link, and my traces are flat (i.e., they don't include the waterfall view I would expect, with the incoming request at the top and the outgoing request nested underneath).
Sample code
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"contrib.go.opencensus.io/exporter/stackdriver/propagation"
	"go.opencensus.io/plugin/ochttp"
	"go.opencensus.io/trace"
)

func main() {
	// Create and register an OpenCensus Stackdriver Trace exporter.
	exporter, err := stackdriver.NewExporter(stackdriver.Options{
		ProjectID: os.Getenv("GOOGLE_CLOUD_PROJECT"),
	})
	if err != nil {
		log.Fatal(err)
	}
	trace.RegisterExporter(exporter)

	client := &http.Client{
		Transport: &ochttp.Transport{
			// Use Google Cloud propagation format.
			Propagation: &propagation.HTTPFormat{},
		},
	}

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		req, _ := http.NewRequest("GET", "https://www.google.com/robots.txt", nil)
		// The trace ID from the incoming request will be
		// propagated to the outgoing request.
		req = req.WithContext(r.Context())
		// The outgoing request will be traced with r's trace ID.
		resp, err := client.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Fprint(w, "OK")
	})
	http.Handle("/foo", handler)
	log.Fatal(http.ListenAndServe(":"+os.Getenv("PORT"), &ochttp.Handler{}))
}
Sample trace:
As mentioned in the reply comment on the original question, can you try configuring the sampler to trace.AlwaysSample()?
You can find notes about the sampling rate in the OpenCensus Trace documentation and in the godoc of the OpenCensus Trace library:
By default, traces will be sampled relatively rarely. To change the sampling frequency for your entire program, call ApplyConfig. Use a ProbabilitySampler to sample a subset of traces, or use AlwaysSample to collect a trace on every run:
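A minimal sketch of that change, placed right after the exporter is registered in main (the sampler applies process-wide):

	trace.RegisterExporter(exporter)
	// Sample every request; by default only a small fraction of traces is
	// recorded, which is why the waterfall view rarely shows up when testing
	// by hand.
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})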

Cross platform go code for appengine

What is the Go-appropriate way to create a FetchUrl/GetURL function that works from the command line and also on Google App Engine, with its custom way of fetching a URL?
I have basic code that fetches and processes some data from a URL. I want to be able to call it from code I use on my desktop and from code deployed to App Engine.
Hopefully that's clear; if not, please let me know and I'll clarify.
If you have code that works both on your local machine and in the App Engine environment, there is nothing more to do.
If you need to do something that should or must be done differently on App Engine, then you need to detect the environment and write different code for each environment.
This detection and code selection is most easily done using build constraints: you put a special comment line at the beginning of your .go file, and the file may or may not be compiled, depending on the environment.
Quoting from The Go Blog: The App Engine SDK and workspaces (GOPATH):
The App Engine SDK introduces a new build constraint term: "appengine". Files that specify
// +build appengine
will be built by the App Engine SDK and ignored by the go tool. Conversely, files that specify
// +build !appengine
are ignored by the App Engine SDK, while the go tool will happily build them.
So, for example, you can have two separate .go files, one for App Engine and one for the local (non-App Engine) environment. Define the same function in both (with the same parameter list), so that no matter which environment the code is built for, the function will have exactly one declaration. We will use this signature:
func GetURL(url string, r *http.Request) ([]byte, error)
Note that the second parameter (*http.Request) is only required on App Engine (in order to be able to create a Context), so the implementation for the local environment does not use it (it can even be nil).
An elegant solution can take advantage of the http.Client type, which is available both in the standard environment and on App Engine, and which can be used to do an HTTP GET request. An http.Client value is acquired differently on App Engine, but once we have one, we can proceed the same way. So we will have common code that receives an http.Client and does the rest.
An example implementation can look like this:
url_local.go:
// +build !appengine

package mypackage

import (
	"net/http"
)

func GetURL(url string, r *http.Request) ([]byte, error) {
	// Local GetURL implementation.
	return GetClient(url, &http.Client{})
}
url_gae.go:
// +build appengine

package mypackage

import (
	"net/http"

	"google.golang.org/appengine"
	"google.golang.org/appengine/urlfetch"
)

func GetURL(url string, r *http.Request) ([]byte, error) {
	// App Engine GetURL implementation.
	ctx := appengine.NewContext(r)
	c := urlfetch.Client(ctx)
	return GetClient(url, c)
}
url_common.go:
// No build constraint: this is common code.

package mypackage

import (
	"io/ioutil"
	"net/http"
)

func GetClient(url string, c *http.Client) ([]byte, error) {
	// Implementation for both local and App Engine.
	resp, err := c.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	return body, nil
}
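A caller then uses GetURL the same way in both environments; for example, a hypothetical handler inside mypackage (the URL is only an illustration):

func fetchHandler(w http.ResponseWriter, r *http.Request) {
	// On App Engine the request is needed to build the context; locally it is ignored.
	body, err := GetURL("https://example.com/robots.txt", r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Write(body)
}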
You could get some clues in the golang/appengine project itself.
For instance, its remote_api/client.go provides the client for connecting remotely to a user's production application.

c.Infof undefined (type context.Context has no field or method Infof) google.golang.org/appengine/log error

In the Go runtime I used the method c.Infof to log messages, but it fails to compile with the following error:
c.Infof undefined (type context.Context has no field or method Infof)
The error clearly says that the App Engine context returned from c := appengine.NewContext(r) is of type context.Context and that it has no Infof method. But contrary to this, the documentation at https://godoc.org/google.golang.org/appengine/log suggests the method exists. Another point to note: the method existed on the context returned by the old "appengine" package (import "appengine"), but it doesn't seem to exist on the context returned by the new google.golang.org/appengine package. What is the c.Infof equivalent on the new context of type context.Context returned by the "google.golang.org/appengine" package?
The example in the package documentation is not correct.
Use the log package functions to write to the App Engine log. Here's the corrected example:
c := appengine.NewContext(r)
query := &log.Query{
	AppLogs:  true,
	Versions: []string{"1"},
}

for results := query.Run(c); ; {
	record, err := results.Next()
	if err == log.Done {
		log.Infof(c, "Done processing results")
		break
	}
	if err != nil {
		log.Errorf(c, "Failed to retrieve next log: %v", err)
		break
	}
	log.Infof(c, "Saw record %v", record)
}
The example in the package documentation was copied from the App Engine Classic package, but not updated to use the new functions. I suggest reporting this to the App Engine Team.
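If all you need is the direct replacement for c.Infof, the pattern is simply to pass the context as the first argument to log.Infof; a minimal sketch (the package name is arbitrary):

package app

import (
	"net/http"

	"google.golang.org/appengine"
	"google.golang.org/appengine/log"
)

func handler(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	log.Infof(c, "Requested URL: %v", r.URL) // old: c.Infof("Requested URL: %v", r.URL)
}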

How to send 204 No Content with Go http package?

I built a tiny sample app with Go on Google App Engine that sends string responses when different URLs are invoked. But how can I use Go's http package to send a 204 No Content response to clients?
package hello

import (
	"fmt"
	"net/http"

	"appengine"
	"appengine/memcache"
)

func init() {
	http.HandleFunc("/", hello)
	http.HandleFunc("/hits", showHits)
}

func hello(w http.ResponseWriter, r *http.Request) {
	name := r.Header.Get("name")
	fmt.Fprintf(w, "Hello %s!", name)
}

func showHits(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "%d", hits(r))
}

func hits(r *http.Request) uint64 {
	c := appengine.NewContext(r)
	newValue, _ := memcache.Increment(c, "hits", 1, 0)
	return newValue
}
According to the net/http package docs, ResponseWriter.WriteHeader sends an HTTP response header with the provided status code. So:
func NoContent(w http.ResponseWriter, r *http.Request) {
	// Set up any headers you want here.
	w.WriteHeader(http.StatusNoContent) // send the headers with a 204 response code
}
This will send a 204 status to the client.
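In the sample app above, it would be registered like the other handlers, e.g. http.HandleFunc("/nocontent", NoContent) inside init() (the path is just an illustration).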
Sending a 204 response from your script means your instance still needs to run, which costs you money. If you are looking for a caching solution, Google has one: it's called Edge Cache.
You only need to respond with the following headers and Google will automatically cache your response on servers nearest to your users (replying with the cached 204 for you). This greatly improves your site's speed and reduces instance cost.
w.Header().Set("Cache-Control", "public, max-age=86400")
w.Header().Set("Pragma", "Public")
You can adjust the max-age, but do it wisely.
By the way, it seems billing must be enabled in order to use Edge Cache.
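Putting the two answers together, a sketch of a handler that replies with 204 and sets the caching headers (the max-age value is illustrative):

func noContentCached(w http.ResponseWriter, r *http.Request) {
	// Headers must be set before WriteHeader; let Edge Cache serve this for a day.
	w.Header().Set("Cache-Control", "public, max-age=86400")
	w.Header().Set("Pragma", "Public")
	w.WriteHeader(http.StatusNoContent)
}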

Upload file in GAE Go

I am trying to upload a file in my GAE app. How do I handle a file upload in Google App Engine using Go, and can I use r.FormValue()?
Go through the Blobstore Go API Overview to get an idea; it contains a full example of how to store and serve user data on Google App Engine using Go.
I would suggest building that example as a completely separate application, so you can experiment with it for a while before trying to integrate it into your existing one.
I managed to solve my problem by using the middle return parameter, "other". The code below is inside the upload handler:
blobs, other, err := blobstore.ParseUpload(r)
Then assign the corresponding form keys:
file := blobs["file"]
name := other["name"]               // "name" is a form field
description := other["description"] // "description" is a form field
And use them like this in my struct value assignment:
newData := data{
	Name:        string(name[0]),
	Description: string(description[0]),
	Image:       string(file[0].BlobKey),
}
datastore.Put(c, datastore.NewIncompleteKey(c, "data", nil), &newData)
Not 100% sure this is the right approach, but it solves my problem: the image is now uploaded to the blobstore, and the other data plus the blob key are saved to the datastore.
Hope this could help others too.
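Put together, the handler looks roughly like this (a sketch: field and kind names follow the snippets above, error handling is abbreviated, and the imports are the classic appengine, appengine/blobstore and appengine/datastore packages):

type data struct {
	Name        string
	Description string
	Image       string
}

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	blobs, other, err := blobstore.ParseUpload(r)
	if err != nil || len(blobs["file"]) == 0 {
		http.Error(w, "upload failed", http.StatusBadRequest)
		return
	}
	newData := data{
		Name:        other.Get("name"),        // plain form field
		Description: other.Get("description"), // plain form field
		Image:       string(blobs["file"][0].BlobKey),
	}
	if _, err := datastore.Put(c, datastore.NewIncompleteKey(c, "data", nil), &newData); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}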
I have tried the full example from https://developers.google.com/appengine/docs/go/blobstore/overview and it worked fine: the file is uploaded to the blobstore and served back. But inserting extra POST values to be saved in the datastore erases the values of r.FormValue(). Please refer to the code below:
func handleUpload(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	// Tried to put the datastore save here: it saves with the correct values,
	// but then raises a server error.
	blobs, _, err := blobstore.ParseUpload(r)
	if err != nil {
		serveError(c, w, err)
		return
	}
	file := blobs["file"]
	if len(file) == 0 {
		c.Errorf("no file uploaded")
		http.Redirect(w, r, "/", http.StatusFound)
		return
	}
	// A new row is inserted, but the name and description columns are empty.
	newData := data{
		Name:        r.FormValue("name"),        // this is always blank
		Description: r.FormValue("description"), // this is always blank
	}
	datastore.Put(c, datastore.NewIncompleteKey(c, "Data", nil), &newData)
	// The image is displayed as expected.
	http.Redirect(w, r, "/serve/?blobKey="+string(file[0].BlobKey), http.StatusFound)
}
Is it not possible to combine the upload with regular data? Why do the values of r.FormValue() seem to disappear, except for the file (the file input)? Even if I were to force the upload first and only then associate the blob key (the result of the upload) with the other data, that would not work, since I cannot pass any r.FormValue() to the upload handler (which, as I said, comes up empty, or raises an error when accessed before the blobs, _, err := blobstore.ParseUpload(r) statement). I hope someone can help me solve this problem. Thank you!
In addition to using the Blobstore API, you can just use the Request.FormFile() method to get the file upload content. See the net/http package documentation for additional help.
Using the Request directly allows you to skip setting up a blobstore.UploadURL() before handling the upload POST.
A simple example would be:
func uploadHandler(w http.ResponseWriter, r *http.Request) {
	// Create an App Engine context.
	c := appengine.NewContext(r)
	// Use FormFile() to get the uploaded file.
	f, _, err := r.FormFile("file")
	if err != nil {
		c.Errorf("FormFile error: %v", err)
		return
	}
	defer f.Close()
	// Do something with the file here.
	c.Infof("Hey!!! got a file: %v", f)
}
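The discarded second return value of FormFile is a *multipart.FileHeader, which carries the original filename. From there the bytes can be read into memory if the uploads are small; a sketch of the continuation (assumes "io/ioutil" is imported):

	// Continuing inside uploadHandler, after FormFile succeeds:
	content, err := ioutil.ReadAll(f)
	if err != nil {
		c.Errorf("reading upload: %v", err)
		return
	}
	c.Infof("got %d bytes", len(content))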
