I am working on a multi-tenant application, and I need to query a particular user from a kind in a particular namespace.
I am able to get the values from the default namespace. The package I am using here is "google.golang.org/appengine/datastore".
q := datastore.NewQuery(ENTITYNAME).Filter("Name =", ed.Expense.Name)
var expenses []ExpenseEntity
return q.GetAll(ed.Ctx, &expenses)
The namespace value is not part of the query (it's not a property of the query). The namespace comes from the context which you pass when executing the query, e.g. to Query.GetAll().
If you have a context (you do as you pass it to q.GetAll()), you can create a derivative context with a given namespace using the appengine.Namespace() function.
For example:
ctx2, err := appengine.Namespace(ed.Ctx, "mynamespace")
// check err
And use this new context to pass to Query.GetAll():
return q.GetAll(ctx2, &expenses)
It is rare that you need to create a new context with a different namespace; ed.Ctx should already be a context with the right namespace. So when / where you create ed.Ctx, you should apply the namespace there, so you can avoid "accidental" exposure of other tenants' data (which would be a major security issue).
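For example, a minimal sketch of applying the namespace once, where the context is created (tenantFromRequest is a hypothetical helper that maps a request to its tenant's namespace):
func tenantContext(r *http.Request) (context.Context, error) {
	ctx := appengine.NewContext(r)
	// Derive a context scoped to the tenant; use it as ed.Ctx so all
	// datastore calls are confined to that tenant's namespace.
	return appengine.Namespace(ctx, tenantFromRequest(r))
}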
If you are using the old lib: google.golang.org/appengine/datastore, then you need to create the context with the namespace:
ctx2, err := appengine.Namespace(ed.Ctx, "mynamespace")
if err != nil {
	return err
}
But you want to be using the latest lib: cloud.google.com/go/datastore. There, the namespace can be set directly on the Query object via Query.Namespace() (this is new), and you then run the query using datastoreClient.Run(ctx, query). For example:
func deleteTestNamespace(ctx context.Context, namespace string) error {
	// projectID is your GCP project ID (client construction simplified here)
	dsClient, err := datastore.NewClient(ctx, projectID)
	if err != nil {
		return err
	}
	defer dsClient.Close()
	for _, kind := range envKinds {
		// Get all keys of this kind in the namespace
		query := datastore.NewQuery(kind).KeysOnly().Namespace(namespace)
		it := dsClient.Run(ctx, query)
		var keys []*datastore.Key
		for {
			// For keys-only queries, pass nil: the key is returned by Next
			key, err := it.Next(nil)
			if err == iterator.Done {
				break
			}
			if err != nil {
				return err
			}
			keys = append(keys, key)
		}
		// Delete all records in chunks of 500 or less (the DeleteMulti limit)
		for i := 0; i < len(keys); i += 500 {
			chunk := min(len(keys)-i, 500)
			if err := dsClient.DeleteMulti(ctx, keys[i:i+chunk]); err != nil {
				return err
			}
		}
	}
	return nil
}
func min(num1 int, num2 int) int {
	if num1 < num2 {
		return num1
	}
	return num2
}
I've been trying to wrap my head around unit testing, dependency injection, TDD and all that stuff, and I've been stuck on testing functions that make database calls, for example.
Let's say you have a PostgresStore struct that takes in a Database interface, which has a Query() method.
type PostgresStore struct {
	db Database
}
type Database interface {
	Query(query string, args ...interface{}) (*sql.Rows, error)
}
And your PostgresStore has a GetPatients method, which calls the database's Query method.
func (p *PostgresStore) GetPatients() ([]Patient, error) {
	rows, err := p.db.Query("SELECT id, name, surname, age, insurance FROM patients")
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	items := []Patient{}
	for rows.Next() {
		var i Patient
		if err := rows.Scan(
			&i.ID,
			&i.Name,
			&i.Surname,
			&i.Age,
			&i.InsuranceCompany,
		); err != nil {
			return nil, err
		}
		items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return items, nil
}
In the real implementation, you would just pass a *sql.DB as the Database argument, but how would you guys write a unit test with a fake database struct?
Let me try to clarify some of your doubts. First of all, I'm gonna share a working example to better understand what's going on. Then, I'm gonna mention all of the relevant aspects.
repo/db.go
package repo

import "database/sql"

type Patient struct {
	ID               int
	Name             string
	Surname          string
	Age              int
	InsuranceCompany string
}

type PostgresStore struct {
	// rely on the generic DB provided by the "sql" package
	db *sql.DB
}

func (p *PostgresStore) GetPatient(id int) ([]Patient, error) {
	rows, err := p.db.Query("SELECT id, name, surname, age, insurance FROM patients")
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	items := []Patient{}
	for rows.Next() {
		var i Patient
		if err := rows.Scan(
			&i.ID,
			&i.Name,
			&i.Surname,
			&i.Age,
			&i.InsuranceCompany,
		); err != nil {
			return nil, err
		}
		items = append(items, i)
	}
	if err := rows.Close(); err != nil {
		return nil, err
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return items, nil
}
Here, the only relevant change is how you define the PostgresStore struct. As the db field, you should rely on the generic DB provided by the database/sql package of the Go Standard Library. Thanks to this, it's trivial to swap its implementation with a fake one, as we're gonna see later.
Please note that in the GetPatient method you're accepting an id parameter but you're not using it. Your query is better suited to a method like GetAllPatients or something like that. Be sure to fix it accordingly (see the sketch below for one way to actually use the id).
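For instance, a minimal sketch of a GetPatient that uses the id (the WHERE clause and the $1 placeholder are assumptions for a Postgres driver; the test below still targets the original slice-returning version):
func (p *PostgresStore) GetPatient(id int) (Patient, error) {
	// Fetch a single patient by primary key.
	row := p.db.QueryRow("SELECT id, name, surname, age, insurance FROM patients WHERE id = $1", id)
	var i Patient
	if err := row.Scan(&i.ID, &i.Name, &i.Surname, &i.Age, &i.InsuranceCompany); err != nil {
		return Patient{}, err
	}
	return i, nil
}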
repo/db_test.go
package repo

import (
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/stretchr/testify/assert"
)

func TestGetPatient(t *testing.T) {
	// 1. set up fake db and mock
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("err not expected: %v", err)
	}
	defer db.Close()
	// 2. configure the mock. What do we expect (query or command)? What's the outcome (error vs no error)?
	rows := sqlmock.NewRows([]string{"id", "name", "surname", "age", "insurance"}).AddRow(1, "john", "doe", 23, "insurance-test")
	mock.ExpectQuery("SELECT id, name, surname, age, insurance FROM patients").WillReturnRows(rows)
	// 3. instantiate the PostgresStore with the fake db
	sut := &PostgresStore{
		db: db,
	}
	// 4. invoke the action we've to test
	got, err := sut.GetPatient(1)
	// 5. assert the result
	assert.Nil(t, err)
	assert.Contains(t, got, Patient{1, "john", "doe", 23, "insurance-test"})
}
Here, there is a lot to cover. First, you can check the comments within the code, which give you a better idea of each step. In the code, we're relying on the package github.com/DATA-DOG/go-sqlmock, which allows us to easily mock a database client.
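As a small follow-up, sqlmock can also verify that every expectation you configured was actually met; you could append something like this to the test:
// Verify that all expected queries were actually executed.
if err := mock.ExpectationsWereMet(); err != nil {
	t.Errorf("unfulfilled expectations: %v", err)
}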
Obviously, the purpose of this code is to give a general idea of how to implement what you need. It could be written in a better way, but it's a good starting point for writing tests in this scenario.
Let me know if this helps, thanks!
A common pattern in monolithic application design is to delegate business logic to a dedicated service, passing in an open transaction as, for example, a javax.persistence.EntityTransaction instance in Java, or an *sql.Tx in Go.
Go example:
// business.go
type BusinessLogicService interface {
	DoSomething(tx *sql.Tx)
}

type businessLogicService struct {
}

func (s *businessLogicService) DoSomething(tx *sql.Tx) {
	tx.ExecContext(.....)
}

func NewBusinessLogicService() BusinessLogicService {
	return &businessLogicService{}
}

// server.go
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
	log.Fatal(err)
}
bls := business.NewBusinessLogicService()
bls.DoSomething(tx)
tx.Commit()
Could the same effect be achieved in an architecture where each of these components are implemented in a different language/runtime? In such an application, Postgres is responsible for doing the 'bookkeeping' in relation to the DB transaction. It seems to me that it should be possible to pass a similar 'handle' for the transaction to another process to read its state and append operations.
For example the equivalent business logic is provided as a gRPC service with the following definition:
message TransactionInfo {
  string transaction_id = 1;
}
message DoSomethingRequest {
  TransactionInfo transaction_info = 1;
}
message DoSomethingResponse {
}
service BusinessLogicService {
  rpc DoSomething(DoSomethingRequest) returns (DoSomethingResponse);
}
The server process BEGINs the transaction and passes a reference to this BusinessLogicService.
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
	log.Fatal(err)
}
conn, err := grpc.Dial(*serverAddr, opts...)
if err != nil {
	...
}
defer conn.Close()
bls := pb.NewBusinessLogicServiceClient(conn)
/// SOMEHOW PASS THE TX OBJECT TO THE REMOTE SERVICE
txObj := &pb.TransactionInfo{....???????????.....}
result, err := bls.DoSomething(ctx, &pb.DoSomethingRequest{TransactionInfo: txObj})
tx.Commit()
Is this possible with Postgres or another DBMS?
Is there a way I can set gorm to time out after a configurable period of time when running a long query? I am using mssql. I have looked through the documentation and haven't discovered a way yet.
This code seems to work for me and is pretty clean. Just use transactions I guess.
// Gorm query below
query = query.Where(whereClause)
// Set up timeout
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
var x sql.TxOptions
db := query.BeginTx(ctx, &x)
defer db.Commit()
// Execute the query (results is already a pointer, so don't dereference it)
if err := db.Find(results).Error; err != nil {
	return err
}
By using WithContext from *gorm.DB you can pass a Timeout Context to Gorm:
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
var products []Product
db.WithContext(ctx).Find(&products)
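To actually observe the timeout, check the chained Error field; a small sketch (assuming gorm v2; whether you get context.DeadlineExceeded back depends on the driver):
if err := db.WithContext(ctx).Find(&products).Error; err != nil {
	if errors.Is(err, context.DeadlineExceeded) {
		// the query ran past the 10-second deadline
	}
}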
Gorm, at the moment, does not seem to implement any query method that accepts a context.Context as a parameter, the way e.g. database/sql's QueryRowContext does.
You can create a workaround and use a context to make your query "expirable".
type QueryResponse struct {
	MyResult *MyResult
	Error    error
}

func queryHelper(ctx context.Context) <-chan *QueryResponse {
	chResult := make(chan *QueryResponse, 1)
	go func() {
		// your query here
		// ...
		// blah blah check stuff, do whatever you want
		// err is an error that comes from the query code
		if err != nil {
			chResult <- &QueryResponse{nil, err}
			return
		}
		chResult <- &QueryResponse{queryResponse, nil}
	}()
	return chResult
}
func MyQueryFunction(ctx context.Context) (*MyResult, error) {
	select {
	case <-ctx.Done():
		return nil, fmt.Errorf("context timeout, query out of time")
	case res := <-queryHelper(ctx):
		return res.MyResult, res.Error
	}
}
And then in your calling function, whatever it is, you can create a context and pass it to MyQueryFunction. If the query exceeds the time you have set, an error is raised, and you should (must) check it.
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
res, err := MyQueryFunction(ctx)
if err != nil {
	fmt.Printf("err %v", err)
} else {
	fmt.Printf("res %v", res)
}
Of course, this is just an example; it doesn't consider a lot of use cases, and a proper implementation inside the framework would be preferable.
I'm trying to PUT my GridDB container (a simple container for users on my website) but I'm having issues.
I've confirmed that the sample code for the go_client works so it's not an issue of improper build or anything of that sort.
func getAdminUsers(c echo.Context) error {
	var tmp []interface{}
	col, err := gridstore.GetContainer("users")
	if err != nil {
		fmt.Println("get container failed")
		return err
	}
	col.SetAutoCommit(true)
	// Create normal query
	query, err := col.Query("SELECT *")
	if err != nil {
		fmt.Println("create query failed")
		return err
	}
	// Execute query
	rs, err := query.Fetch(true)
	if err != nil {
		fmt.Println("create rs from query failed")
		return err
	}
	for rs.HasNext() {
		// Update row
		rrow, err := rs.NextRow()
		if err != nil {
			fmt.Println("NextRow from rs failed")
			return err
		}
		tmp = rrow
		fmt.Println("Person: name=", rrow[0], " status=", rrow[1], " count=", rrow[2], " lob=", rrow[3])
	}
	col.Commit()
	fmt.Println(tmp)
	return c.Render(http.StatusOK, "admin", "admin")
}
My container is properly being written but for some reason the querying isn't working. This is rather basic code so I expect there's some minor detail I'm missing somewhere.
As of now, I'm getting errors here: "get container failed". My error could either be from writing or querying, though I suspect it's from querying.
Can you try initializing your container by using CreateContainerInfo instead? It will create the container if it doesn't exist, and if it does already exist, it's a more robust method of getting your container.
conInfo, _ := griddb_go.CreateContainerInfo("Users",
	[][]interface{}{
		{"email", griddb_go.TYPE_STRING},
		{"password", griddb_go.TYPE_BLOB}},
	griddb_go.CONTAINER_COLLECTION,
	true)
col, _ := gridstore.PutContainer(conInfo, false)
I'm working on an App Engine app using the datastore. I'm attempting to gob-encode an interface and store it into the datastore. But when I try to load from the datastore, I get the error:
gob: name not registered for interface: "main27155.strand"
The peculiar thing is that the load() method starts working after having called the save() method. It no longer returns an error, and everything saved in the datastore is loaded as expected. But when I restart the instance, the load() method stops working again.
The load and save methods I mention refer to the methods defined by the datastore.PropertyLoadSaver interface.
From the looks of it, it seems like a problem with registering the types/interfaces with gob, but I have exactly the same gob.Register() calls in both the load() and save() methods.
I even tried removing the gob.Register() calls from both the load and save methods and adding them to init(). The exact same behavior was observed.
How can I load from the datastore on a cold start?
type bio struct {
	Id       string
	Hp       int
	godie    chan bool // should be buffered
	dam      chan int
	Genetics dna
}
type dna interface {
	decode() mRNA
	Get(int) trait
	Set(int, trait)
	Duplicate() dna
	Len() int
}
type trait interface {
	mutate() trait
}
// implements dna{}
type strand []trait
// implements trait{}
type tdecoration string
type color struct {
	None bool // If true, colors are not shown in theme
	Bg   bool // If true, color is a background color
	R    int  // 0-255
	G    int
	B    int
}
func start(w http.ResponseWriter, r *http.Request) error {
	c := appengine.NewContext(r)
	var bs []bio
	if _, err := datastore.NewQuery("bio").GetAll(c, &bs); err != nil {
		log.Println("bs is len: ", len(bs))
		return err
	}
	...
	return nil
}
func stop(w http.ResponseWriter, r *http.Request) error {
	c := appengine.NewContext(r)
	log.Println("Saving top 20 colors")
	var k []*datastore.Key
	var bs []*bio
	stat := getStats()
	for i, b := range stat.Leaderboard {
		k = append(k, datastore.NewKey(c, "bio", b.Id, 0, nil))
		bv := b
		bs = append(bs, &bv)
		// At most 20 bios survive across reboots
		if i >= 19 {
			break
		}
	}
	// Assemble slice of keys for deletion
	dk, err := datastore.NewQuery("bio").KeysOnly().GetAll(c, nil)
	if err != nil {
		return fmt.Errorf("query error: %v", err)
	}
	fn := func(c appengine.Context) error {
		// Delete all old entries
		err := datastore.DeleteMulti(c, dk)
		if err != nil {
			return fmt.Errorf("delete error: %v", err)
		}
		// save the elite in the datastore
		_, err = datastore.PutMulti(c, k, bs)
		if err != nil {
			return err
		}
		return nil
	}
	return datastore.RunInTransaction(c, fn, &datastore.TransactionOptions{XG: true})
}
// satisfy datastore PropertyLoadSaver interface ===============================
func (b *bio) Load(c <-chan datastore.Property) error {
	gob.Register(&color{})
	gob.Register(new(tdecoration))
	var str strand
	gob.Register(str)
	tmp := struct {
		Id     string
		Hp     int
		Gengob []byte
	}{}
	if err := datastore.LoadStruct(&tmp, c); err != nil {
		return err
	}
	b.Id = tmp.Id
	b.Hp = tmp.Hp
	return gob.NewDecoder(bytes.NewReader(tmp.Gengob)).Decode(&(b.Genetics))
}

func (b *bio) Save(c chan<- datastore.Property) error {
	defer close(c)
	gob.Register(&color{})
	gob.Register(new(tdecoration))
	var str strand
	gob.Register(str)
	var buf bytes.Buffer
	gen := b.Genetics
	if err := gob.NewEncoder(&buf).Encode(&gen); err != nil {
		log.Println(err)
		return err
	}
	dp := []datastore.Property{
		{Name: "Id", Value: b.Id},
		{Name: "Hp", Value: int64(b.Hp)},
		{Name: "Gengob", Value: buf.Bytes(), NoIndex: true},
	}
	for _, p := range dp {
		c <- p
	}
	return nil
}
Additional info: this behavior was not present before I stuffed the datastore calls in stop() into datastore.RunInTransaction().
Register all types in an init() function using RegisterName(). Delete all existing data from the store and you should be good to go.
App Engine generates a mangled name for the main package every time the application is built. The name generated by Register() includes this mangled package name. Any gobs encoded with the mangled name will only be readable using the same build of the app. If you cause the application to be rebuilt by modifying the code, then the app will not be able to decode gobs stored previously.
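For example, a minimal sketch based on the types above (the registered names are arbitrary strings; they just have to stay stable across builds):
func init() {
	// Stable names survive App Engine's per-build package-name mangling.
	gob.RegisterName("color", &color{})
	gob.RegisterName("tdecoration", new(tdecoration))
	gob.RegisterName("strand", strand{})
}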