access properties of referenced objects in golang template (Google app engine) - google-app-engine

I have a data model Sections:
type Sections struct {
    SectionName   string
    IsFather      bool
    ParentSection *datastore.Key
}
I pass a Sections value to a Go template, and I want to get the parent section's name (ParentSection.SectionName). How can I do this from the template, the way Jinja2 in Python allows with {{ParentSection.get().SectionName}}?

The html/template package is not "appengine-aware": it does not know about the GAE platform, and it does not support automatic resolution of such references.
By design philosophy, templates should not contain complex logic. If something is (or looks) too complex in a template, it should be implemented in a function. You can register your custom functions with the Template.Funcs() method and then call them from your templates.
For your use case I recommend the following custom function which loads a Sections by its key:
func loadSections(ctx appengine.Context, k *datastore.Key) (*Sections, error) {
    s := Sections{}
    err := datastore.Get(ctx, k, &s)
    return &s, err
}
Note that you need the Context to load entities from the Datastore, so you have to make it available in the template params as well. Your template params may look like this:
ctx := appengine.NewContext(r)
m := map[string]interface{}{
    "Sections": s, // a previously loaded Sections
    "Ctx":      ctx,
}
And by registering and using this function, you can get what you want:
t := template.Must(template.New("").Funcs(template.FuncMap{
    "loadSections": loadSections,
}).Parse(`my section name: {{.Sections.SectionName}},
parent section name: {{(loadSections .Ctx .Sections.ParentSection).SectionName}}`))
t.Execute(w, m)
Now let's say you have a parent Sections whose name is "parSecName", and a child Sections whose name is "childSecName", and the child's ParentSection holds the parent's key. Executing the above template, you'll see this result:
my section name: childSecName,
parent section name: parSecName
Complete example
Below is a complete, working example. Note: it is for demonstration purposes only; it is not optimized for production.
It registers the /put path to insert 2 Sections. And you may use any other path to execute the template. So test it like this:
First insert 2 Sections:
http://localhost:8080/put
Then execute and view template result:
http://localhost:8080/view
Complete runnable code:
// +build appengine

package gplay

import (
    "appengine"
    "appengine/datastore"
    "html/template"
    "net/http"
)

func init() {
    http.HandleFunc("/put", puthandler)
    http.HandleFunc("/", myhandler)
}

func myhandler(w http.ResponseWriter, r *http.Request) {
    ctx := appengine.NewContext(r)
    s := Sections{}
    if err := datastore.Get(ctx, datastore.NewKey(ctx, "Sections", "", 2, nil), &s); err != nil {
        panic(err)
    }
    m := map[string]interface{}{
        "Sections": s,
        "Ctx":      ctx,
    }
    t := template.Must(template.New("").Funcs(template.FuncMap{
        "loadSections": loadSections,
    }).Parse(`my section name: {{.Sections.SectionName}},
parent section name: {{(loadSections .Ctx .Sections.ParentSection).SectionName}}`))
    t.Execute(w, m)
}

func loadSections(ctx appengine.Context, k *datastore.Key) (*Sections, error) {
    s := Sections{}
    err := datastore.Get(ctx, k, &s)
    return &s, err
}

func puthandler(w http.ResponseWriter, r *http.Request) {
    ctx := appengine.NewContext(r)
    s := Sections{"parSecName", false, nil}
    var k *datastore.Key
    var err error
    if k, err = datastore.Put(ctx, datastore.NewKey(ctx, "Sections", "", 1, nil), &s); err != nil {
        panic(err)
    }
    s.SectionName = "childSecName"
    s.ParentSection = k
    if _, err = datastore.Put(ctx, datastore.NewKey(ctx, "Sections", "", 2, nil), &s); err != nil {
        panic(err)
    }
}

type Sections struct {
    SectionName   string
    IsFather      bool
    ParentSection *datastore.Key
}
Some notes
This child-parent relation can also be modeled with the Key itself, as a key may optionally contain a parent key (see the sketch below).
If you don't want to "store" the parent Key in the entity's key itself, it may also be enough to just store either the key's name or the key's ID (depending on what you use), as from that the key can be constructed.
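For illustration only (not part of the answer above), here is a minimal sketch of that key-hierarchy alternative; the kind and string IDs are made up:
// Hedged sketch: model the parent via the key hierarchy instead of storing a
// *datastore.Key property. Kind and string IDs are illustrative only.
parentKey := datastore.NewKey(ctx, "Sections", "parent-sec", 0, nil)
childKey := datastore.NewKey(ctx, "Sections", "child-sec", 0, parentKey)

// The parent's key can later be recovered from the child's key directly:
recovered := childKey.Parent() // equals parentKey; no extra property needed
_ = recovered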

Related

How can I mock database calls without a library?

I've been trying to wrap my head around unit testing, dependency injection, TDD and all that stuff, and I've been stuck on testing functions that make database calls, for example.
Let's say you have a PostgresStore struct that takes in a Database interface, which has a Query() method.
type PostgresStore struct {
    db Database
}

type Database interface {
    Query(query string, args ...interface{}) (*sql.Rows, error)
}
And your PostgresStore has a GetPatients method, which calls database query.
func (p *PostgresStore) GetPatients() ([]Patient, error) {
    rows, err := p.db.Query("SELECT id, name, age, insurance FROM patients")
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    items := []Patient{}
    for rows.Next() {
        var i Patient
        if err := rows.Scan(
            &i.ID,
            &i.Name,
            &i.Surname,
            &i.Age,
            &i.InsuranceCompany,
        ); err != nil {
            return nil, err
        }
        items = append(items, i)
    }
    if err := rows.Close(); err != nil {
        return nil, err
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return items, nil
}
In the real implementation, you would just pass a *sql.DB as Database argument, but how would you guys write a unit test with a fake database struct?
Let me try to clarify some of your doubts. First of all, I'm going to share a working example to make it easier to see what's going on. Then I'll mention all of the relevant aspects.
repo/db.go
package repo

import "database/sql"

type Patient struct {
    ID               int
    Name             string
    Surname          string
    Age              int
    InsuranceCompany string
}

type PostgresStore struct {
    // rely on the generic DB provided by the "sql" package
    db *sql.DB
}

func (p *PostgresStore) GetPatient(id int) ([]Patient, error) {
    rows, err := p.db.Query("SELECT id, name, age, insurance FROM patients")
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    items := []Patient{}
    for rows.Next() {
        var i Patient
        if err := rows.Scan(
            &i.ID,
            &i.Name,
            &i.Surname,
            &i.Age,
            &i.InsuranceCompany,
        ); err != nil {
            return nil, err
        }
        items = append(items, i)
    }
    if err := rows.Close(); err != nil {
        return nil, err
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return items, nil
}
Here, the only relevant change is how you define the PostgresStore struct. For the db field, you should rely on the generic DB provided by the database/sql package of the Go standard library. Thanks to this, it's trivial to swap its implementation with a fake one, as we're going to see later.
Please note that in the GetPatient method you're accepting an id parameter but you're not using it. Your query is more suited to a method like GetAllPatients or something like that. Also note that the Scan call reads five fields while the SELECT lists only four columns (surname is missing); the mock below won't catch that, but a real database would reject it. Be sure to fix both accordingly.
repo/db_test.go
package repo

import (
    "testing"

    "github.com/DATA-DOG/go-sqlmock"
    "github.com/stretchr/testify/assert"
)

func TestGetPatient(t *testing.T) {
    // 1. set up fake db and mock
    db, mock, err := sqlmock.New()
    if err != nil {
        t.Fatalf("err not expected: %v", err)
    }
    // 2. configure the mock. What we expect (query or command)? The outcome (error vs no error).
    rows := sqlmock.NewRows([]string{"id", "name", "surname", "age", "insurance"}).AddRow(1, "john", "doe", 23, "insurance-test")
    mock.ExpectQuery("SELECT id, name, age, insurance FROM patients").WillReturnRows(rows)
    // 3. instantiate the PostgresStore with the fake db
    sut := &PostgresStore{
        db: db,
    }
    // 4. invoke the action we've to test
    got, err := sut.GetPatient(1)
    // 5. assert the result
    assert.Nil(t, err)
    assert.Contains(t, got, Patient{1, "john", "doe", 23, "insurance-test"})
}
Here, there is a lot to cover. First, check the comments within the code: they give you a better idea of each step. In the code, we're relying on the package github.com/DATA-DOG/go-sqlmock, which allows us to easily mock a database client.
Obviously, the purpose of this code is to give a general idea of how to implement your needs. It could be written in a better way, but it's a good starting point for writing tests in this scenario.
Let me know if this helps, thanks!
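One small, optional addition not shown in the example above: go-sqlmock can also verify that every expectation you configured was actually met, which catches queries that never ran. Placed at the end of TestGetPatient, that would look roughly like this:
// Optional sketch: assert that all configured expectations on the mock were satisfied.
if err := mock.ExpectationsWereMet(); err != nil {
    t.Errorf("unfulfilled expectations: %v", err)
}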

Publishing to Google Pub/Sub asynchronously through a goroutine

I'm trying to push messages to Google Pub/Sub asynchronously through a goroutine, but I'm facing the error below:
panic: not an App Engine context
I'm using mux and have an API handler.
n = 1 million
func apihandler(w http.ResponseWriter, r *http.Request) {
    go createuniquecodes(n)
    fmt.Fprint(w, "request running in background")
}

func createuniquecodes(n int) {
    c := make(chan string)
    go generatecodes(c, n)
    for val := range c {
        publishuq(val)
    }
}

func generatecodes(c chan string, n int) {
    for i := 0; i < n; i++ {
        uniquecode := someRandomString() // pseudocode: generate a random code
        // publish to channel and pubsub
        c <- uniquecode
    }
    close(c)
}
func publishuq(msg string) error {
    ctx := context.Background()
    client, err := pubsub.NewClient(ctx, projectId)
    if err != nil {
        log.Fatalf("Could not create pubsub Client: %v", err)
    }
    t := client.Topic(topicName)
    result := t.Publish(ctx, &pubsub.Message{
        Data: []byte(msg),
    })
    id, err := result.Get(ctx)
    if err != nil {
        return err
    }
    fmt.Printf("Published a message; msg ID: %v\n", id)
    return nil
}
Please note that I need to generate 5 million unique codes.
How do I define a context in the goroutine, since I'm doing everything asynchronously?
I assume you're using the App Engine standard (not flexible) environment. Please note that a request handler (apihandler in your case) "has a limited amount of time to generate and return a response to a request, typically around 60 seconds. Once the deadline has been reached, the request handler is interrupted".
You're trying to "break out" of the request when calling go createuniquecodes(n), and the ctx := context.Background() further down the line is what panics with not an App Engine context. You could technically use appengine.NewContext(r *http.Request) to derive a valid context from the original request, but again, you'd only have 60s before your request times out.
Please have a look at Task Queues, as they "let applications perform work, called tasks, asynchronously outside of a user request".
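For illustration only (not part of the original answer), a minimal sketch of handing the work to a push task queue via google.golang.org/appengine/taskqueue instead of a bare goroutine; the /generate-codes path, the queue choice and the handler shape are assumptions:
// Hedged sketch: enqueue the long-running code generation as a task. The task
// handler gets its own request (and its own valid App Engine context) with a
// much larger deadline than a user-facing request.
func apihandler(w http.ResponseWriter, r *http.Request) {
    ctx := appengine.NewContext(r) // valid App Engine context for this request

    t := taskqueue.NewPOSTTask("/generate-codes", url.Values{"n": {"5000000"}})
    if _, err := taskqueue.Add(ctx, t, ""); err != nil { // "" = default queue
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprint(w, "request running in background")
}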

How to query an entity from Datastore with a namespace in Go?

I am working on a multi-tenant application. I need to query a particular user from a kind and from a particular namespace.
I am able to get the values from the default namespace. The package I am using here is "google.golang.org/appengine/datastore".
q := datastore.NewQuery(ENTITYNAME).Filter("Name =", ed.Expense.Name)
var expenses []ExpenseEntiry
return q.GetAll(ed.Ctx, &expenses)
The namespace value is not part of the query (it's not a property of the query). The namespace comes from the context which you pass when executing the query, e.g. to Query.GetAll().
If you have a context (you do as you pass it to q.GetAll()), you can create a derivative context with a given namespace using the appengine.Namespace() function.
For example:
ctx2, err := appengine.Namespace(ed.Ctx, "mynamespace")
// check err
And use this new context to pass to Query.GetAll():
return q.GetAll(ctx2, &expenses)
It is rare that you need to create a new context with a different namespace; ed.Ctx should already be a context with the right namespace. So when / where you create ed.Ctx, you should already apply the namespace there, so that you avoid "accidental" exposure of other tenants' data (which would be a major security issue). A sketch of that follows.
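For illustration (not from the original answer), a minimal sketch of applying the namespace at the point where the request context is created; tenantForRequest and the ExpenseDAO shape are hypothetical:
// Hedged sketch: derive the tenant's namespace once, at request time, and use
// that context for every Datastore call. tenantForRequest is a hypothetical
// helper mapping the request (host, header, user, ...) to a namespace string.
func expensesHandler(w http.ResponseWriter, r *http.Request) {
    ctx := appengine.NewContext(r)
    nsCtx, err := appengine.Namespace(ctx, tenantForRequest(r))
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    ed := ExpenseDAO{Ctx: nsCtx} // queries run through ed.Ctx only see this tenant's data
    _ = ed
}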
If you are using the old lib: google.golang.org/appengine/datastore, then you need to create the context with the namespace:
ctx2, err := appengine.Namespace(ed.Ctx, "mynamespace")
if err != nil {
    return err
}
But you WANT to be using the latest lib: cloud.google.com/go/datastore. There, the namespace can be set directly on the Query object (this is new). You then run the query using datastoreClient.Run(ctx, query).
func deleteTestNamespace(ctx context.Context, namespace string) error {
    // projectID and envKinds (the kinds to clean up) are assumed to be defined elsewhere.
    dsClient, err := datastore.NewClient(ctx, projectID)
    if err != nil {
        return err
    }
    defer dsClient.Close()

    for _, kind := range envKinds {
        // Get all keys of this kind in the given namespace
        query := datastore.NewQuery(kind).KeysOnly().Namespace(namespace)
        it := dsClient.Run(ctx, query)
        var keys []*datastore.Key
        for {
            key, err := it.Next(nil) // keys-only query: pass nil as dst, use the returned key
            if err == iterator.Done {
                break
            }
            if err != nil {
                return err
            }
            keys = append(keys, key)
        }
        // Delete all records in chunks of 500 or less
        for i := 0; i < len(keys); i += 500 {
            chunk := min(len(keys)-i, 500)
            if err := dsClient.DeleteMulti(ctx, keys[i:i+chunk]); err != nil {
                return err
            }
        }
    }
    return nil
}
func min(num1 int, num2 int) int {
    if num1 < num2 {
        return num1
    }
    return num2
}

gob: interface is only registered on Encode but not on Decode

I'm working on an appengine app using the datastore. I'm attempting to gob
encode an interface and store it into the datastore. But when I try to load from
the datastore, I get the error:
gob: name not registered for interface: "main27155.strand"
The peculiar thing is that the load() method starts working after having
called the save() method. It no longer returns an error, and everything saved
in the datastore is loaded as expected. But when I restart the intance, the
load() method stops working again.
The load and save methods I mention refer to the methods defined by the
datastore.PropertyLoadSaver interface
From the looks of it, it seems like a problem with registering the
type/interfaces with gob, but I have exactly the same gob.Register() calls in
both the load() and save() methods.
I even tried removing the gob.Register() calls from both load and save methods
and adding it to init(). The exact same behavior was observed.
How can I load my datastore on a cold start?
type bio struct {
    Id       string
    Hp       int
    godie    chan bool // should be buffered
    dam      chan int
    Genetics dna
}

type dna interface {
    decode() mRNA
    Get(int) trait
    Set(int, trait)
    Duplicate() dna
    Len() int
}

type trait interface {
    mutate() trait
}

// implements dna{}
type strand []trait

// implements trait{}
type tdecoration string

type color struct {
    None bool // If true, colors are not shown in theme
    Bg   bool // If true, color is a background color
    R    int  // 0-255
    G    int
    B    int
}
func start(w http.ResponseWriter, r *http.Request) error {
    c := appengine.NewContext(r)
    var bs []bio
    if _, err := datastore.NewQuery("bio").GetAll(c, &bs); err != nil {
        log.Println("bs is len: ", len(bs))
        return err
    }
    ...
    return nil
}

func stop(w http.ResponseWriter, r *http.Request) error {
    c := appengine.NewContext(r)
    log.Println("Saving top 20 colors")
    var k []*datastore.Key
    var bs []*bio
    stat := getStats()
    for i, b := range stat.Leaderboard {
        k = append(k, datastore.NewKey(c, "bio", b.Id, 0, nil))
        bv := b
        bs = append(bs, &bv)
        // At most 20 bios survive across reboots
        if i > 178 {
            break
        }
    }
    // Assemble slice of keys for deletion
    dk, err := datastore.NewQuery("bio").KeysOnly().GetAll(c, nil)
    if err != nil {
        return errors.New(fmt.Sprintf("Query error: %s", err.Error()))
    }
    fn := func(c appengine.Context) error {
        // Delete all old entries
        err := datastore.DeleteMulti(c, dk)
        if err != nil {
            return errors.New(fmt.Sprintf("Delete error: %s", err.Error()))
        }
        // save the elite in the datastore
        _, err = datastore.PutMulti(c, k, bs)
        if err != nil {
            return err
        }
        return nil
    }
    return datastore.RunInTransaction(c, fn, &datastore.TransactionOptions{XG: true})
}
// satisfy datastore PropertyLoadSaver interface ===============================
func (b *bio) Load(c <-chan datastore.Property) error {
    gob.Register(&color{})
    gob.Register(new(tdecoration))
    var str strand
    gob.Register(str)
    tmp := struct {
        Id     string
        Hp     int
        Gengob []byte
    }{}
    if err := datastore.LoadStruct(&tmp, c); err != nil {
        return err
    }
    b.Id = tmp.Id
    b.Hp = tmp.Hp
    return gob.NewDecoder(strings.NewReader(string(tmp.Gengob))).Decode(&(b.Genetics))
}

func (b *bio) Save(c chan<- datastore.Property) error {
    defer close(c)
    gob.Register(&color{})
    gob.Register(new(tdecoration))
    var str strand
    gob.Register(str)
    var buf bytes.Buffer
    gen := b.Genetics
    if err := gob.NewEncoder(&buf).Encode(&gen); err != nil {
        log.Println(err)
        return err
    }
    dp := []datastore.Property{
        {Name: "Id", Value: b.Id},
        {Name: "Hp", Value: int64(b.Hp)},
        {Name: "Gengob", Value: buf.Bytes(), NoIndex: true},
    }
    for _, p := range dp {
        c <- p
    }
    return nil
}
Additional info: This behavior was not present before I stuffed the datastore
calls in stop() into datastore.RunInTransaction()
Register all types in init() functions using RegisterName(). Delete all existing data from the store and you should be good to go.
App Engine generates a mangled name for the main package every time the application is built. The name generated by Register() includes this mangled package name. Any gobs encoded with the mangled name will only be readable using the same build of the app. If you cause the application to be rebuilt by modifying the code, then the app will not be able to decode gobs stored previously.
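For illustration, a minimal sketch of what that could look like for the types in the question; the stable names chosen here are arbitrary, they just have to stay the same across builds:
// Hedged sketch: register every concrete type under an explicit, stable name,
// so the encoded type name no longer depends on the build-specific package name.
func init() {
    gob.RegisterName("color", &color{})
    gob.RegisterName("tdecoration", new(tdecoration))
    gob.RegisterName("strand", strand{})
}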

Why do I get "invalid entity type" with datastore.Put using a datastore.PropertyList in a Go AppEngine aetest?

This test fails with partnermerge_test.go:22: datastore: invalid entity type
package bigdipper

import (
    "testing"

    "appengine/aetest"
    "appengine/datastore"
)

func TestCreateMigrationProposal(t *testing.T) {
    c, err := aetest.NewContext(nil)
    if err != nil {
        t.Fatal(err)
    }
    defer c.Close()
    if _, err := datastore.Put(
        c,
        datastore.NewKey(c, "ORDER", "order-id-1", 0, nil),
        datastore.PropertyList{}); err != nil {
        t.Fatal(err)
    }
}
The docs for the datastore.Put function say:
Put saves the entity src into the datastore with key k. src must be a
struct pointer or implement PropertyLoadSaver; if a struct pointer
then any unexported fields of that struct will be skipped. If k is an
incomplete key, the returned key will be a unique key generated by the
datastore.
This was somewhat confusing when trying to use this with a PropertyList as the src. A PropertyList does not implement PropertyLoadSaver, but a *PropertyList does. Adding an & before PropertyList to get a pointer to it fixes this test.
package bigdipper

import (
    "testing"

    "appengine/aetest"
    "appengine/datastore"
)

func TestCreateMigrationProposal(t *testing.T) {
    c, err := aetest.NewContext(nil)
    if err != nil {
        t.Fatal(err)
    }
    defer c.Close()
    if _, err := datastore.Put(
        c,
        datastore.NewKey(c, "ORDER", "order-id-1", 0, nil),
        &datastore.PropertyList{}); err != nil {
        t.Fatal(err)
    }
}
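For completeness, a hedged sketch (not part of the original test) of putting a non-empty PropertyList the same way; the property names are made up:
// Hedged sketch: a populated PropertyList works identically; the pointer is
// what satisfies PropertyLoadSaver. Property names here are illustrative only.
props := datastore.PropertyList{
    {Name: "Status", Value: "MIGRATING"},
    {Name: "Amount", Value: int64(42), NoIndex: true},
}
if _, err := datastore.Put(
    c,
    datastore.NewKey(c, "ORDER", "order-id-2", 0, nil),
    &props); err != nil {
    t.Fatal(err)
}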
