How to use cloud.google.com/go/datastore with AppEngine?

I'm migrating my Golang AppEngine app to the Go 1.12+ runtime, and I need to switch to cloud.google.com/go/datastore. It's not clear to me how to use it with AppEngine; could someone please verify my assumptions?
My assumption is that somewhere inside main() I can run (note the context.Background()):
db, err := datastore.NewClient(context.Background(), datastore.DetectProjectID)
if err != nil {
    panic(err)
}
defer db.Close()
And then from my handlers I can use that db:
func blah(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    key := datastore.NameKey("blah", blah, nil)
    db.Get(ctx, key, blah2)
}
Am I correct? Or do I need to run datastore.NewClient() separately from each web handler?

I run datastore.NewClient() separately in each web handler. I only do this once per handler, and only if that handler actually needs the datastore. Also make sure that you Close() the client at the end of your handler, otherwise you'll leak memory.
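To make that concrete, a minimal sketch of the per-handler pattern might look like this (the handler name, entity type, and key are made up for illustration):

package main

import (
    "fmt"
    "net/http"

    "cloud.google.com/go/datastore"
)

// Task is an example entity type used only for this sketch.
type Task struct {
    Description string
}

func getTask(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()

    // Create the client for this request only, and release it when the handler returns.
    db, err := datastore.NewClient(ctx, datastore.DetectProjectID)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer db.Close()

    var task Task
    key := datastore.NameKey("Task", "sampletask1", nil)
    if err := db.Get(ctx, key, &task); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprintln(w, task.Description)
}

Creating and closing a client on every request does add a little setup cost per handler; that is the trade-off of this approach compared to a single shared client.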

Related

why is GAE returning server error when the code works in the local development server?

I'm trying to test the Datastore functionality using Google App Engine and my code works as expected in the local development server:
// code based on the following guide: https://cloud.google.com/datastore/docs/reference/libraries#client-libraries-install-go
package datastoretest

import (
    "fmt"
    "log"
    "net/http"

    "cloud.google.com/go/datastore"
    "google.golang.org/appengine"
)

type Task struct {
    Description string
}

func init() {
    http.HandleFunc("/", handler)
}

func handler(w http.ResponseWriter, r *http.Request) {
    ctx := appengine.NewContext(r)
    // Set Google Cloud Platform project ID.
    projectID := "myProjectID" // note: actual ID is different
    // Creates a client.
    client, err := datastore.NewClient(ctx, projectID)
    if err != nil {
        log.Fatalf("Failed to create client: %v", err)
    }
    // Sets the kind for the new entity.
    kind := "Task"
    // Sets the name/ID for the new entity.
    name := "sampletask1"
    // Creates a Key instance.
    taskKey := datastore.NameKey(kind, name, nil)
    // Creates a Task instance.
    task := Task{
        Description: "Buy milk",
    }
    // Saves the new entity.
    if _, err := client.Put(ctx, taskKey, &task); err != nil {
        log.Fatalf("Failed to save task: %v", err)
    }
    fmt.Fprint(w, "Saved ", taskKey, ":", task.Description)
}
However, after being deployed to a GAE project, it's returning the following message to the visitor:
Error: Server Error
The server encountered an error and could not complete your request.
Please try again in 30 seconds.
I found out that the use of the package "cloud.google.com/go/datastore" was causing the problem. The solution was to use the package "google.golang.org/appengine/datastore" instead: the former package was not compatible with GAE in my implementation, and switching to the latter led to working code. For Datastore on GAE, a better tutorial to follow is this one: https://cloud.google.com/appengine/docs/standard/go/getting-started/creating-guestbook
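For comparison, a rough sketch of the same handler rewritten against google.golang.org/appengine/datastore might look like this (the kind, entity name, and task are copied from the code above; the exact rewrite is an assumption, not the original author's code):

package datastoretest

import (
    "fmt"
    "net/http"

    "google.golang.org/appengine"
    "google.golang.org/appengine/datastore"
)

type Task struct {
    Description string
}

func init() {
    http.HandleFunc("/", handler)
}

func handler(w http.ResponseWriter, r *http.Request) {
    ctx := appengine.NewContext(r)

    // With the appengine datastore package the project is implicit,
    // so no client needs to be created.
    taskKey := datastore.NewKey(ctx, "Task", "sampletask1", 0, nil)
    task := Task{Description: "Buy milk"}

    if _, err := datastore.Put(ctx, taskKey, &task); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprint(w, "Saved ", taskKey, ":", task.Description)
}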

How to prevent / handle ErrBadConn with Azure SQL Database

I'm using this driver: https://github.com/denisenkom/go-mssqldb and in production, against an Azure SQL Database at the Standard S3 level, we are getting far too many ErrBadConn (driver: bad connection) errors.
How can I prevent, or at least gracefully handle, this? Here's some code to show how things are set up.
A typical database function call
package dal

var db *sql.DB

type Database struct{}

func (d Database) Open() {
    newDB, err := sql.Open("mssql", os.Getenv("dbconnestion"))
    if err != nil {
        panic(err)
    }
    err = newDB.Ping()
    if err != nil {
        panic(err)
    }
    db = newDB
}

func (d Database) Close() {
    db.Close()
}

// ... in another file

func (e *Entities) Add(entity Entity) (int64, error) {
    stmt, err := db.Prepare("INSERT INTO Entities VALUES(?, ?)")
    if err != nil {
        return -1, err
    }
    defer stmt.Close()
    result, err := stmt.Exec(entity.Field1, entity.Field2)
    if err != nil {
        return -1, err
    }
    return result.LastInsertId()
}
On a web api
func main() {
    db := dal.Database{}
    db.Open()
    defer db.Close()
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        entities := &dal.Entities{}
        id, err := entities.Add(dal.Entity{Field1: "a", Field2: "b"})
        if err != nil {
            // here, all across my web API and the other web packages and CLI commands that use the dal, I'm getting random ErrBadConn
        }
    })
}
So in short, the dal package is shared across multiple Azure web apps and command-line Go apps.
I cannot see a pattern in these errors; they are frequent and occur randomly. We are using Bugsnag to log the errors from all our apps.
For completeness, sometimes our Standard S3 limit of 200 concurrent connections is reached.
I've triple-checked everywhere the package accesses the database, making sure that all sql.Rows are closed and all db.Prepare statements are closed. As an example, here's how a typical query function looks:
func (e *Entities) GetByID(id int64) ([]Entity, error) {
    rows, err := db.Query("SELECT * FROM Entities WHERE ID = ?", id)
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    var results []Entity
    for rows.Next() {
        var r Entity
        err := readEntity(rows, &r)
        if err != nil {
            return nil, err
        }
        results = append(results, r)
    }
    if err = rows.Err(); err != nil {
        return nil, err
    }
    return results, nil
}
The readEntity function basically only does Scan on the fields.
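Roughly like this (the actual column list is an assumption for illustration):

// readEntity scans the current row's columns into the Entity fields.
// The ID/Field1/Field2 columns are assumptions based on the other snippets.
func readEntity(rows *sql.Rows, e *Entity) error {
    return rows.Scan(&e.ID, &e.Field1, &e.Field2)
}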
I don't think it's code related; the unit tests run well locally. It's just that once deployed to Azure, after running for some time, the driver: bad connection errors start to show up very frequently.
I've run this query to investigate, as suggested in this question: Azure SQL server max pool size was reached error
select * from sys.dm_exec_requests
But I'm not exactly sure what I should be paying attention to here.
Things I've done / made sure of:
As is suggested, database/sql should handle the connection pool, so having a global variable for the database connection should be fine.
Made sure sql.Rows and db.Prepare statements are closed everywhere.
Increased the Azure SQL level to S3.
There's an issue for the sql driver I'm using about Azure SQL putting database connections in a bad state if they idle for more than 2 minutes:
https://github.com/denisenkom/go-mssqldb/issues/81
Is the way database/sql handles connection pooling in any way incompatible with how Azure SQL Database manages its connections?
Is there a way to gracefully handle this? I know that C# / Entity Framework has connection resiliency / retry logic for Azure SQL; is it for similar reasons? How could I implement this without having to touch error handling everywhere? I mean, I clearly don't want to do something like this:
if err == sql.ErrBadConn {
    // close and re-open the global db object
    // retry
}
Surely this is not my only option here?
Any help would be extremely welcome.
Thank you
I'm not seeing anywhere that you close your database. Best practice (in other languages - not positive about Go) is to close / deallocate / dereference the database object after use, to release the connection back into the pool. If you're running out of connection resources, you're being told that you need to release things. Holding the reference open means nobody else can use that connection, so it'll stay around until it gets recycled because it's timed out. This is why you're getting it intermittently, rather than consistently - it depends on having a certain number of connections taking place w/in a certain period of time.
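In Go specifically, releasing connections back to the pool mostly means closing Rows and Stmts, which the question already does; beyond that, the *sql.DB pool itself can be bounded and recycled so idle connections never get old enough to hit Azure's cutoff. A sketch of Open() with pool limits (the numbers are guesses, and SetConnMaxLifetime needs Go 1.6 or newer):

func (d Database) Open() {
    newDB, err := sql.Open("mssql", os.Getenv("dbconnestion"))
    if err != nil {
        panic(err)
    }
    // Keep well under the Azure S3 limit of 200 concurrent connections.
    newDB.SetMaxOpenConns(50)
    newDB.SetMaxIdleConns(10)
    // Recycle connections before Azure's ~2 minute idle cutoff can put them
    // in a bad state (needs "time" in the imports).
    newDB.SetConnMaxLifetime(90 * time.Second)

    if err = newDB.Ping(); err != nil {
        panic(err)
    }
    db = newDB
}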
Pierre,
How often do you run into these connection issues? I recommend you build retry logic to gracefully recover from bad connections. Here is how you would do it with C#: https://azure.microsoft.com/en-us/documentation/articles/sql-database-develop-csharp-retry-windows/
If you still feel you need assistance, feel free to shoot me an email with your server name and database name at meetb#microsoft.com and we will get our team to look into this issue.
Cheers,
Meet
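A rough Go equivalent of that retry advice could be a small wrapper; the attempt count, backoff, and the choice to retry only driver.ErrBadConn are all illustrative assumptions:

import (
    "database/sql/driver"
    "time"
)

// withRetry retries op a few times when the driver reports a bad connection,
// so individual call sites don't need their own ErrBadConn handling.
func withRetry(op func() error) error {
    const attempts = 3
    var err error
    for i := 0; i < attempts; i++ {
        err = op()
        if err != driver.ErrBadConn {
            return err
        }
        // Back off briefly before trying again on a fresh connection.
        time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
    }
    return err
}

Call sites would then wrap their database calls, for example err := withRetry(func() error { _, err := entities.Add(e); return err }), instead of checking for bad connections everywhere.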

Google AppEngine DataStore Read & Write (Golang) [duplicate]

I am currently trying to test a piece of my code that runs a query on the datastore before putting in a new entity, to ensure that duplicates are not created. The code I wrote works fine in the context of the app, but the tests I wrote for that method are failing. It seems that I cannot access data put into the datastore through queries in the context of the testing package.
One possibility might lie in the output from goapp test, which reads: "Applying all pending transactions and saving the datastore." This line prints out after both the get and put methods are called (I verified this with log statements).
I tried closing the context and creating a new one for the different operations, but unfortunately that didn't help either. Below is a simple test case that puts in an object and then runs a query on it. Any help would be appreciated.
type Entity struct {
    Value string
}

func TestEntityQuery(t *testing.T) {
    c, err := aetest.NewContext(nil)
    if err != nil {
        t.Fatal(err)
    }
    defer c.Close()

    key := datastore.NewIncompleteKey(c, "Entity", nil)
    key, err = datastore.Put(c, key, &Entity{Value: "test"})
    if err != nil {
        t.Fatal(err)
    }

    q := datastore.NewQuery("Entity").Filter("Value =", "test")
    var entities []Entity
    keys, err := q.GetAll(c, &entities)
    if err != nil {
        t.Fatal(err)
    }
    if len(keys) == 0 {
        t.Error("No keys found in query")
    }
    if len(entities) == 0 {
        t.Error("No entities found in query")
    }
}
There is nothing wrong with your test code. The issue lies in the Datastore itself. Most queries in the HR Datastore are not "immediately consistent" but eventually consistent. You can read more about this in the Datastore documentation.
So basically what happens is that you put an entity into the Datastore, and the SDK's Datastore "simulates" the latency that you can observe in production, so if you run a query right after that (which is not an ancestor query), the query result will not include the new entity you just saved.
If you put a few seconds of sleep between datastore.Put() and q.GetAll(), you will see the test pass. Try it. In my test it was enough to sleep just 100ms, and the test always passed. But when writing tests for such cases, use the StronglyConsistentDatastore: true option, as can be seen in JohnGB's answer.
You would also see the test pass without sleep if you'd use Ancestor queries because they are strongly consistent.
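For example, the test's query would become strongly consistent if the entity were saved under a parent key and queried as its ancestor (the parent kind and name below are made up):

// Put the entity under a parent key...
parent := datastore.NewKey(c, "Parent", "p1", 0, nil)
key := datastore.NewIncompleteKey(c, "Entity", parent)
key, err = datastore.Put(c, key, &Entity{Value: "test"})

// ...then ancestor queries on that parent see the write immediately.
q := datastore.NewQuery("Entity").Filter("Value =", "test").Ancestor(parent)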
The way to do this is to force the datastore to be strongly consistent by setting up the context like this:
c, err := aetest.NewContext(&aetest.Options{StronglyConsistentDatastore: true})
if err != nil {
    t.Fatal(err)
}
Now the datastore won't need any sleep to work, which is faster, and better practice in general.
Update: This only works with the old aetest package which was imported via appengine/aetest. It does not work with the newer aetest package which is imported with google.golang.org/appengine/aetest. App Engine has changed from using an appengine.Context to using a context.Context, and consequently the way that the test package now works is quite different.
To complement JohnGB's answer: in the latest version of aetest there are more steps needed to get a context with strong consistency. First create an instance, then create a request from that instance, which you can use to produce a context.
inst, err := aetest.NewInstance(
    &aetest.Options{StronglyConsistentDatastore: true})
if err != nil {
    t.Fatal(err)
}
defer inst.Close()

req, err := inst.NewRequest("GET", "/", nil)
if err != nil {
    t.Fatal(err)
}
ctx := appengine.NewContext(req)
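The body of the original test can then run against that ctx, for example (a sketch only, with imports and types as in the question above):

key := datastore.NewIncompleteKey(ctx, "Entity", nil)
if _, err := datastore.Put(ctx, key, &Entity{Value: "test"}); err != nil {
    t.Fatal(err)
}

var entities []Entity
keys, err := datastore.NewQuery("Entity").Filter("Value =", "test").GetAll(ctx, &entities)
if err != nil {
    t.Fatal(err)
}
if len(keys) != 1 || len(entities) != 1 {
    t.Errorf("got %d keys and %d entities, want 1 of each", len(keys), len(entities))
}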

Is it possible to recover from panic on google app engine?

I'm wondering if it's possible to recover from a panic. It seems GAE has its own panic recovery mechanism, but I can't find any hook to handle it in my app.
Handlers in an AppEngine web app are registered in the same way as they would be in a normal Go application. You just don't have to call http.ListenAndServe() explicitly (because it will be called by the platform), and handler registration happens in an init() function (not in main()).
Having said that, the same panic-recover wrapping works on AppEngine too, and unfortunately there is no other, better way.
Take a look at this example: it uses a function registered with HandleFunc() and a Handler registered with Handle() to handle 2 URL patterns, but both intentionally panic (they refuse to serve):
func myHandleFunc(w http.ResponseWriter, r *http.Request) {
    panic("I'm myHandlerFunc and I refuse to serve!")
}

type MyHandler int

func (m *MyHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    panic("I'm MyHandler and I refuse to serve!")
}

func main() {
    http.HandleFunc("/myfunc", myHandleFunc)
    http.Handle("/myhandler", new(MyHandler))
    panic(http.ListenAndServe(":8080", nil))
}
Directing your browser to http://localhost:8080/myfunc and http://localhost:8080/myhandler results in HTTP 500 status: internal server error (or an empty response depending on where you check it).
The general idea is to use recover to "catch" the panics from handlers (spec: Handling panics). We can "wrap" handler functions or Handlers so that we first register a defer statement, which is called even if the rest of the function panics, and in which we recover from the panicking state.
See these 2 functions:
func protectFunc(hf func(http.ResponseWriter, *http.Request)) func(http.ResponseWriter, *http.Request) {
    return func(w http.ResponseWriter, r *http.Request) {
        defer func() {
            r := recover()
            if r != nil {
                // hf() panicked, we just recovered from it.
                // Handle error somehow, serve custom error page.
                w.Write([]byte("Something went bad but I recovered and sent this!"))
            }
        }()
        hf(w, r)
    }
}

func protectHandler(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        defer func() {
            r := recover()
            if r != nil {
                // h.ServeHTTP() panicked, we just recovered from it.
                // Handle error somehow, serve custom error page.
                w.Write([]byte("Something went bad but I recovered and sent this!"))
            }
        }()
        h.ServeHTTP(w, r)
    })
}
The first one takes a function and returns one which calls the one we passed, but recovers from a panicking state if one was initiated.
The second one takes a Handler and returns another Handler which similarly calls the passed one, but also handles panics and restores normal execution.
Now if we register handler functions and Handlers protected by these methods, the registered handlers will never panic (assuming the code after restoring normal execution does not panic):
http.HandleFunc("/myfunc-protected", protectFunc(myHandleFunc))
http.Handle("/myhandler-protected", protectHandler(new(MyHandler)))
Visiting the http://localhost:8080/myfunc-protected and http://localhost:8080/myhandler-protected URLs results in HTTP 200 status (OK) with the message:
Something went bad but I recovered and sent this!

How to serve large zip files of Blobstore images?

I want to serve a dynamic zip file containing multiple user-uploaded images that are stored in the Blobstore.
I'm successfully doing it with the following code, but I run into a problem where the AppEngine instances are being terminated because they consume too much memory.
Is it possible to serve such zip files by streaming them directly to the client instead of keeping them in memory? Is there another solution?
w.Header().Set("Content-Type", "application/zip")
w.Header().Set("Content-Disposition", "attachment;filename=photos.zip")
writer := zip.NewWriter(w)
defer writer.Close()
for _, key := range l.Files {
info, err := blobstore.Stat(c, appengine.BlobKey(key))
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
wr, err := writer.Create(info.Filename)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
reader := blobstore.NewReader(c, appengine.BlobKey(key))
io.Copy(wr, reader)
}
You should probably create the zip file in the blobstore, then serve it from there:
Create the zip in the blobstore utilizing blobstore.Writer
Serve the zip using blobstore.Send
This way you can also speed up subsequent requests for the same zip/bundle as you will have already created/stored it.
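A rough sketch of that two-step approach, reusing the loop from the question (blobstore.Create is the old writer API and is deprecated in favor of Cloud Storage; the function name and signature here are made up):

// buildZip writes the bundle once into the blobstore and returns the blob key of the zip.
func buildZip(c context.Context, files []string) (appengine.BlobKey, error) {
    w, err := blobstore.Create(c, "application/zip")
    if err != nil {
        return "", err
    }
    zw := zip.NewWriter(w)
    for _, key := range files {
        info, err := blobstore.Stat(c, appengine.BlobKey(key))
        if err != nil {
            return "", err
        }
        fw, err := zw.Create(info.Filename)
        if err != nil {
            return "", err
        }
        if _, err := io.Copy(fw, blobstore.NewReader(c, appengine.BlobKey(key))); err != nil {
            return "", err
        }
    }
    if err := zw.Close(); err != nil {
        return "", err
    }
    if err := w.Close(); err != nil {
        return "", err
    }
    return w.Key()
}

The handler that serves the download then only needs blobstore.Send(w, zipKey), which streams the stored blob to the client without buffering it in the instance.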
