Get an entity by a key passed via GET parameter - google-app-engine

I have
http://localhost:8080/?key=ahFkZXZ-ZGV2LWVkdW5hdGlvbnIOCxIIVXNlckluZm8YLAw
I would like to know how to:
Decode and convert the "key" parameter to a *datastore.Key
And use it to fetch an entity.
Thanks for your help!

First: You should think about which packages you need for this case. Since you're trying to read a GET value from a URL, you probably need a function from net/http.
In particular, (*http.Request).FormValue(key string) returns GET and POST parameters.
Second: Now open the appengine/datastore documentation and find functions which do the following:
Decode a string into a *datastore.Key: DecodeKey(encoded string) (*Key, error)
Load the entity stored under a key: Get(c appengine.Context, key *Key, dst interface{}) error
Now it's a really easy thing:
func home(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	// Get the encoded key from the URL
	keyURL := r.FormValue("key")
	// Decode the key
	key, err := datastore.DecodeKey(keyURL)
	if err != nil { // Couldn't decode the key
		// Do some error handling
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Fetch the entity behind the key and load it into "data"
	var data Data
	err = datastore.Get(c, key, &data)
	if err != nil { // Couldn't find the entity
		// Do some error handling
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
}
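For completeness, the handler above loads the entity into a value of type Data, which is never defined here; a minimal sketch of such a type (the struct name and fields are placeholders, not taken from the question, whose encoded key appears to reference a UserInfo kind):
type Data struct {
	// Placeholder fields for illustration; use the real properties of your entity kind.
	Name  string
	Email string
}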

Related

How to pass a C.ulong between processes?

In order to reuse cgo pointers (type C.uintptr_t) across multiple applications, I tried to use Go's net/rpc to pass the initialized pointer, but the program reported an error: rpc: gob error encoding body: gob: type not registered for interface: main._Ctype_ulong. I think there might be some issue with the pointer types.
1. The init function:
func initApp(configPath *C.char) C.uintptr_t
2. App1, a daemon process, calls the init function and passes the pointer to another process via Go RPC:
var globalSDKPtr C.ulong

type HelloService struct{}

func (p *HelloService) Hello(request string, reply *C.ulong) error {
	*reply = globalSDKPtr
	return nil
}

func startRPS() {
	rpc.RegisterName("HelloService", new(HelloService))
	listener, err := net.Listen("tcp", ":1234")
	if err != nil {
		log.Fatal("ListenTCP error:", err)
	}
	conn, err := listener.Accept()
	if err != nil {
		log.Fatal("Accept error:", err)
	}
	rpc.ServeConn(conn)
}
3. App2 receives the pointer and reuses it:
client, err := rpc.Dial("tcp", "localhost:1234")
if err != nil {
	log.Fatal("dialing:", err)
}
var reply C.ulong
err = client.Call("HelloService.Hello", "hello", &reply)
if err != nil {
	log.Fatal(err)
}
res := C.query(reply)
I guess the root of the problem is that my approach is wrong: the way to reuse cgo pointers may not be Go RPC but shared memory. In any case, passing cgo-related values around is always confusing. Can anyone help me out?
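For what it's worth, the gob error comes from trying to push the cgo-generated type main._Ctype_ulong through the RPC layer. A hedged sketch (not a verified fix) of one way to sidestep that is to use a plain Go uint64 in the RPC signature and convert at the boundaries; note this only makes sense if the value is an opaque handle that remains valid in the receiving process, which is an assumption the question does not confirm:
// Sketch only: keep cgo types out of the RPC signature so gob can encode the reply.
func (p *HelloService) Hello(request string, reply *uint64) error {
	*reply = uint64(globalSDKPtr) // convert the C.ulong handle to a plain Go integer
	return nil
}

// Client side (App2), converting back before handing the handle to cgo:
//   var reply uint64
//   if err := client.Call("HelloService.Hello", "hello", &reply); err != nil {
//       log.Fatal(err)
//   }
//   res := C.query(C.ulong(reply)) // assumes the handle is meaningful in this process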

How can I mock database calls without a library?

I've been trying to wrap my head around unit testing, dependency injection, TDD and all that stuff, and I've been stuck on testing functions that make database calls, for example.
Let's say you have a PostgresStore struct that takes in a Database interface, which has a Query() method.
type PostgresStore struct {
	db Database
}

type Database interface {
	Query(query string, args ...interface{}) (*sql.Rows, error)
}
And your PostgresStore has a GetPatients method, which calls the database's Query method.
func (p *PostgresStore) GetPatients() ([]Patient, error) {
	rows, err := p.db.Query("SELECT id, name, surname, age, insurance FROM patients")
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	items := []Patient{}
	for rows.Next() {
		var i Patient
		if err := rows.Scan(
			&i.ID,
			&i.Name,
			&i.Surname,
			&i.Age,
			&i.InsuranceCompany,
		); err != nil {
			return nil, err
		}
		items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return items, nil
}
In the real implementation, you would just pass a *sql.DB as Database argument, but how would you guys write a unit test with a fake database struct?
Let me try to clarify some of your doubts. First of all, I'm gonna share a working example to better understand what's going on. Then, I'm gonna mention all of the relevant aspects.
repo/db.go
package repo

import "database/sql"

type Patient struct {
	ID               int
	Name             string
	Surname          string
	Age              int
	InsuranceCompany string
}

type PostgresStore struct {
	// rely on the generic DB provided by the "sql" package
	db *sql.DB
}
func (p *PostgresStore) GetPatient(id int) ([]Patient, error) {
	rows, err := p.db.Query("SELECT id, name, surname, age, insurance FROM patients")
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	items := []Patient{}
	for rows.Next() {
		var i Patient
		if err := rows.Scan(
			&i.ID,
			&i.Name,
			&i.Surname,
			&i.Age,
			&i.InsuranceCompany,
		); err != nil {
			return nil, err
		}
		items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return items, nil
}
Here, the only relevant change is how you define the PostgresStore struct. As the db field, you should rely on the generic DB provided by the database/sql package of the Go Standard Library. Thanks to this, it's trivial to swap its implementation with a fake one, as we're gonna see later.
Please note that in the GetPatient method you're accepting an id parameter but you're not using it. Your query is more suitable to a method like GetAllPatients or something like that. Be sure to fix it accordingly.
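As a hedged sketch only (the WHERE clause, the $1 placeholder syntax and the method name are assumptions for illustration, not part of the original answer), a variant that actually uses the id could look like this:
func (p *PostgresStore) GetPatientByID(id int) (Patient, error) {
	// Fetch a single patient by id with a parameterized query.
	var i Patient
	err := p.db.QueryRow(
		"SELECT id, name, surname, age, insurance FROM patients WHERE id = $1", id,
	).Scan(&i.ID, &i.Name, &i.Surname, &i.Age, &i.InsuranceCompany)
	return i, err
}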
repo/db_test.go
package repo

import (
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/stretchr/testify/assert"
)
func TestGetPatient(t *testing.T) {
	// 1. set up the fake db and the mock
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("err not expected: %v", err)
	}
	// 2. configure the mock: what do we expect (query or command)? What is the outcome (error vs no error)?
	rows := sqlmock.NewRows([]string{"id", "name", "surname", "age", "insurance"}).AddRow(1, "john", "doe", 23, "insurance-test")
	mock.ExpectQuery("SELECT id, name, surname, age, insurance FROM patients").WillReturnRows(rows)
	// 3. instantiate the PostgresStore with the fake db
	sut := &PostgresStore{
		db: db,
	}
	// 4. invoke the action we have to test
	got, err := sut.GetPatient(1)
	// 5. assert the result
	assert.Nil(t, err)
	assert.Contains(t, got, Patient{1, "john", "doe", 23, "insurance-test"})
}
Here, there is a lot to cover. First, you can check the comments within the code, which give you a better idea of each step. In the code, we're relying on the package github.com/DATA-DOG/go-sqlmock, which allows us to easily mock a database client.
Obviously, the purpose of this code is to give a general idea on how to implement your needs. It can be written in a better way but it can be a good starting point for writing tests in this scenario.
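One small addition often made to tests like this (a hedged sketch using go-sqlmock's ExpectationsWereMet helper) is a final check that every expectation configured on the mock was actually met:
	// At the end of TestGetPatient: fail if some expected query was never executed.
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("unfulfilled expectations: %v", err)
	}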
Let me know if this helps, thanks!

Kindless ancestor query for different entity kinds in golang

According to the documentation, it should be possible to retrieve an ancestor and all of its descendants, regardless of their kind.
In my implementation, the ancestor and the descendants are of different kinds. The following code, however, always returns the error "invalid entity type":
q := datastore.NewQuery("").Ancestor(tomKey)
t := q.Run(ctx)
for {
	var x interface{}
	_, err := t.Next(&x)
	if err == datastore.Done {
		break
	}
	if err != nil {
		log.Errorf(ctx, "Error fetching entity: %v", err)
		break
	}
}
It seems that the call to t.Next(&x) expects a specific type instead of an empty interface. Would somebody please help me to resolve this problem?
I don't know whether the documentation is wrong, but you can use datastore.PropertyList to fetch arbitrary values, like this:
var v datastore.PropertyList
key, err := iter.Next(&v)
...
props, err := v.Save()
...
See the datastore.PropertyList documentation for more information.
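Putting that together with the question's loop, a minimal sketch (assuming the same google.golang.org/appengine datastore and log packages, with ctx and tomKey defined as in the question) might look like this:
// Kindless ancestor query: load each descendant into a datastore.PropertyList,
// which works regardless of the entity's kind.
q := datastore.NewQuery("").Ancestor(tomKey)
it := q.Run(ctx)
for {
	var props datastore.PropertyList
	key, err := it.Next(&props)
	if err == datastore.Done {
		break
	}
	if err != nil {
		log.Errorf(ctx, "Error fetching entity: %v", err)
		break
	}
	// key.Kind() tells you which kind this entity is; props holds its raw properties.
	log.Infof(ctx, "%v (%s): %v", key, key.Kind(), props)
}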

Weird datastore error in Go, "The kind is the empty string"

I recently started getting an error I have never seen before when making a simple datastore.GetAll() request. I can't figure out what it means, and I can't find any documentation for the error message or any help from Googling it.
Here's my code:
type MyUnderlyingStruct struct {
	ApplyTo        *datastore.Key
	ApplyFrom      *datastore.Key
	Amount         float64
	LocationKey    *datastore.Key
	DepartmentKey  *datastore.Key
	SubjectAreaKey *datastore.Key
}

type MyStruct []MyUnderlyingStruct

// In the case where I get the error, someKey is a valid, complete Key value
// of a different kind than what we are querying for, and there is actually
// an entity in my datastore that matches this query.
func (x *MyStruct) Load(w http.ResponseWriter, r *http.Request, someKey *datastore.Key) error {
	c := appengine.NewContext(r)
	q := datastore.NewQuery("MyUnderlyingStruct_KindName").Order("-Amount")
	if someKey != nil {
		q = q.Filter("ApplyTo=", someKey)
	}
	_, err := q.GetAll(c, x)
	if _, ok := err.(*datastore.ErrFieldMismatch); ok {
		err = nil
	}
	if err != nil && err != datastore.Done {
		return err
	}
	return nil
}
Which returns this error:
API error 1 (datastore_v3: BAD_REQUEST): The kind is the empty string.
Can anyone tell me why I am getting this error, or what it is trying to tell me?
Looking at your issue at first glance (because I am not familiar with Google's datastore API), it seems to me the problem is a result of zeroed-memory initialization using the new keyword.
When a struct is created with that keyword without assigning starting values to the fields, zero values are used as defaults; for a string field, that's "" (empty). Go actually threw a very helpful error for you.
As you pointed out, you had used Mykey := new(datastore.Key). Thanks for your generosity; this can serve as an answer for future users.
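To spell out the failure mode (a hedged sketch; the kind name and ID below are placeholders, not values from the question): a key created with new has an empty Kind, and filtering a query on it makes the datastore API reject the request, whereas a key built with datastore.NewKey carries a real kind:
// Zero-valued key: Kind() is "", which is what triggers
// "BAD_REQUEST: The kind is the empty string" when used in a query filter.
badKey := new(datastore.Key)

// Properly constructed key: kind, string ID, numeric ID and parent are explicit.
// "SomeKind" and "some-id" are placeholders for illustration only.
goodKey := datastore.NewKey(c, "SomeKind", "some-id", 0, nil)

_, _ = badKey, goodKey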

Is it possible to store arbitrary data in GAE Golang Blobstore?

I am creating a large database application in Google App Engine Go. Most of my pieces of data are small, so I have no problem storing them in Datastore. However, I know I will run into a few entries that will be a few megabytes big, so I will have to use Blobstore to save them.
Looking at the reference for Blobstore, it appears that the service was mainly intended to be used for files being uploaded to the service. What are the functions I need to call to store arbitrary data in the Blobstore like I would in Datastore? I can already convert the data to []byte and I don't need to index anything in the blob, just to store and fetch it by ID.
There are two ways that you could write files to the blobstore.
One is to use a deprecated API documented at the end of the blobstore page; their example code is below. The approach Google is switching to instead is storing files in Google Cloud Storage and serving them via the blobstore.
The other approach would be to simulate a user upload in some fashion. Go has an HTTP client that can send files to be uploaded to web addresses, though that would be a hacky way to do it.
var k appengine.BlobKey
w, err := blobstore.Create(c, "application/octet-stream")
if err != nil {
	return k, err
}
_, err = w.Write([]byte("... some data ..."))
if err != nil {
	return k, err
}
err = w.Close()
if err != nil {
	return k, err
}
return w.Key()
As @yumaikas said, the Files API is deprecated. If this data comes from some sort of user upload, you should modify the upload form to work with Blobstore upload URLs (in particular, setting the encoding to multipart/form-data or multipart/mixed and naming all file upload fields "file", except the ones that you don't want to be stored in Blobstore).
However, if that is not possible (e.g. you don't have control over the user input, or you have to pre-process the data on the server before you store it in Blobstore), then you'll either have to use the deprecated Files API, or upload the data using the URLFetch API.
Here's a complete example application that will store a sample file for you in Blobstore.
package sample

import (
	"bytes"
	"mime/multipart"
	"net/http"

	"appengine"
	"appengine/blobstore"
	"appengine/urlfetch"
)

const SampleData = `foo,bar,spam,eggs`

func init() {
	http.HandleFunc("/test", StoreSomeData)
	http.HandleFunc("/upload", Upload)
}

func StoreSomeData(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	// First you need to create the upload URL:
	u, err := blobstore.UploadURL(c, "/upload", nil)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		c.Errorf("%s", err)
		return
	}
	// Now you can prepare a form that you will submit to that URL.
	var b bytes.Buffer
	fw := multipart.NewWriter(&b)
	// Do not change the form field, it must be "file"!
	// You are free to change the filename though, it will be stored in the BlobInfo.
	file, err := fw.CreateFormFile("file", "example.csv")
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		c.Errorf("%s", err)
		return
	}
	if _, err = file.Write([]byte(SampleData)); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		c.Errorf("%s", err)
		return
	}
	// Don't forget to close the multipart writer.
	// If you don't close it, your request will be missing the terminating boundary.
	fw.Close()
	// Now that you have a form, you can submit it to your handler.
	req, err := http.NewRequest("POST", u.String(), &b)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		c.Errorf("%s", err)
		return
	}
	// Don't forget to set the content type, this will contain the boundary.
	req.Header.Set("Content-Type", fw.FormDataContentType())
	// Now submit the request.
	client := urlfetch.Client(c)
	res, err := client.Do(req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		c.Errorf("%s", err)
		return
	}
	// Check the response status, it should be whatever you return in the `/upload` handler.
	if res.StatusCode != http.StatusCreated {
		http.Error(w, res.Status, http.StatusInternalServerError)
		c.Errorf("bad status: %s", res.Status)
		return
	}
	// Everything went fine.
	w.WriteHeader(res.StatusCode)
}

func Upload(w http.ResponseWriter, r *http.Request) {
	c := appengine.NewContext(r)
	// Here we just check that the upload went through as expected.
	if _, _, err := blobstore.ParseUpload(r); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		c.Errorf("%s", err)
		return
	}
	// Everything seems fine. Signal the other handler using the status code.
	w.WriteHeader(http.StatusCreated)
}
Now if you curl http://localhost:8080/test, it will store a file in the Blobstore.
Important: I'm not exactly sure how you would be charged for bandwidth for the request that you make to your own app. In the worst case, you will be charged for internal traffic, which is cheaper than normal bandwidth, IIRC.
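Since the question also asks about fetching the data back by ID: a hedged sketch of reading a stored blob into memory, assuming you kept the appengine.BlobKey that blobstore.ParseUpload returned for the "file" field (the helper name is illustrative only, and it needs the "io/ioutil" import):
func readBlob(c appengine.Context, k appengine.BlobKey) ([]byte, error) {
	// blobstore.NewReader returns an io.Reader over the blob's contents.
	return ioutil.ReadAll(blobstore.NewReader(c, k))
}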
