I wrote a REST API that connects to Google Firestore as a backend. I have a users collection and want to ensure that each user document has a unique username. My first approach was to query the database for a document with the same username; if one is found, the user has to choose another one (on registration).
Here is the corresponding code (Go):
func (service *Service) CreateUser(ctx context.Context, user domains.User) (domains.User, error) {
    err := domains.ValidateNewUser(user)
    if err != nil {
        return user, err
    }

    // Check if email already exists
    unique, err := service.CheckEmailUniqueness(ctx, user.Email)
    if err != nil {
        return user, err
    } else if !unique {
        return user, ErrEmailExists
    }

    // Check if username already exists
    unique, err = service.CheckUsernameUniqueness(ctx, user.Username)
    if err != nil {
        return user, err
    } else if !unique {
        return user, ErrUsernameExists
    }

    docRef, _, err := service.Client.Collection("users").Add(ctx, user)
    if err != nil {
        return user, ErrCreatingUser
    }
    user.UserID = docRef.ID
    return user, nil
}
func (service *Service) CheckUsernameUniqueness(ctx context.Context, username string) (bool, error) {
    iter := service.Client.Collection("users").Where("username", "==", username).Limit(1).Documents(ctx)
    defer iter.Stop() // release the iterator's resources

    _, err := iter.Next()
    if err == iterator.Done {
        // No document with this username exists.
        return true, nil
    }
    if err != nil {
        return false, err
    }
    return false, nil
}
But how can you prevent race conditions in this case? For example, when two different users want to take the same username and their requests are processed by two different instances of my REST server. I am very inexperienced with Firestore and NoSQL in general, so please excuse me if I am missing something crucial.
Transactions are the answer, but note that (at least in the mobile/web client SDKs) you can't run queries inside of transactions, only fetch documents by ID:
https://stackoverflow.com/a/50071736/4458510
A possible solution would be to make the ID the email/username. However, if a user wants to change their email, then the ID would have to change, and that's bad since other things will probably reference that user.
A common pattern is to have another collection for the enforcement of uniqueness. You would then interact with both the 'users' collection and the 'user-emails' (or 'usernames') collection within the same transaction, as sketched below.
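Here is a minimal sketch of that pattern applied to usernames, assuming the cloud.google.com/go/firestore client and a 'usernames' collection whose document IDs are the usernames themselves (the function name and the 'usernames' / 'uid' names are placeholders, not from your code):

import (
    "context"

    "cloud.google.com/go/firestore"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// createUserUnique claims the username and creates the user atomically.
func (service *Service) createUserUnique(ctx context.Context, user domains.User) error {
    return service.Client.RunTransaction(ctx, func(ctx context.Context, tx *firestore.Transaction) error {
        nameRef := service.Client.Collection("usernames").Doc(user.Username)
        // In a Firestore transaction, all reads must happen before any writes.
        snap, err := tx.Get(nameRef)
        if err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        if snap != nil && snap.Exists() {
            return ErrUsernameExists
        }
        userRef := service.Client.Collection("users").NewDoc()
        // If a concurrent transaction claims the same name first, this
        // transaction is retried, sees the document, and fails above.
        if err := tx.Create(nameRef, map[string]interface{}{"uid": userRef.ID}); err != nil {
            return err
        }
        return tx.Create(userRef, user)
    })
}

To free a username again (e.g. on account deletion or a rename), delete its 'usernames' document in the same transaction that updates the user.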
I've been trying to wrap my head around unit testing, dependency injection, TDD and all that stuff, and I've been stuck on testing functions that make database calls, for example.
Let's say you have a PostgresStore struct that takes in a Database interface, which has a Query() method.
type PostgresStore struct {
    db Database
}

type Database interface {
    Query(query string, args ...interface{}) (*sql.Rows, error)
}
And your PostgresStore has a GetPatients method, which queries the database:
func (p *PostgresStore) GetPatients() ([]Patient, error) {
    rows, err := p.db.Query("SELECT id, name, surname, age, insurance FROM patients")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    items := []Patient{}
    for rows.Next() {
        var i Patient
        if err := rows.Scan(
            &i.ID,
            &i.Name,
            &i.Surname,
            &i.Age,
            &i.InsuranceCompany,
        ); err != nil {
            return nil, err
        }
        items = append(items, i)
    }
    if err := rows.Close(); err != nil {
        return nil, err
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return items, nil
}
In the real implementation, you would just pass a *sql.DB as the Database argument, but how would you guys write a unit test with a fake database struct?
Let me try to clarify some of your doubts. First of all, I'm gonna share a working example to better understand what's going on. Then, I'm gonna mention all of the relevant aspects.
repo/db.go
package repo
import "database/sql"
type Patient struct {
    ID               int
    Name             string
    Surname          string
    Age              int
    InsuranceCompany string
}

type PostgresStore struct {
    // rely on the generic DB provided by the "sql" package
    db *sql.DB
}
func (p *PostgresStore) GetPatient(id int) ([]Patient, error) {
    rows, err := p.db.Query("SELECT id, name, surname, age, insurance FROM patients")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    items := []Patient{}
    for rows.Next() {
        var i Patient
        if err := rows.Scan(
            &i.ID,
            &i.Name,
            &i.Surname,
            &i.Age,
            &i.InsuranceCompany,
        ); err != nil {
            return nil, err
        }
        items = append(items, i)
    }
    if err := rows.Close(); err != nil {
        return nil, err
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return items, nil
}
Here, the only relevant change is how you define the PostgresStore struct. As the db field, you should rely on the generic DB provided by the database/sql package of the Go Standard Library. Thanks to this, it's trivial to swap its implementation with a fake one, as we're gonna see later.
Please note that in the GetPatient method you're accepting an id parameter but you're not using it. Your query is more suitable to a method like GetAllPatients or something like that. Be sure to fix it accordingly.
repo/db_test.go
package repo
import (
    "testing"

    "github.com/DATA-DOG/go-sqlmock"
    "github.com/stretchr/testify/assert"
)
func TestGetPatient(t *testing.T) {
    // 1. set up the fake db and the mock
    db, mock, err := sqlmock.New()
    if err != nil {
        t.Fatalf("err not expected: %v", err)
    }
    defer db.Close()

    // 2. configure the mock: what do we expect (query or command)? what is the outcome (error vs no error)?
    rows := sqlmock.NewRows([]string{"id", "name", "surname", "age", "insurance"}).AddRow(1, "john", "doe", 23, "insurance-test")
    mock.ExpectQuery("SELECT id, name, surname, age, insurance FROM patients").WillReturnRows(rows)

    // 3. instantiate the PostgresStore with the fake db
    sut := &PostgresStore{
        db: db,
    }

    // 4. invoke the action we have to test
    got, err := sut.GetPatient(1)

    // 5. assert the result
    assert.Nil(t, err)
    assert.Contains(t, got, Patient{1, "john", "doe", 23, "insurance-test"})
}
Here, there is a lot to cover. First, you can check the comments within the code, which give you a better idea of each step. In the code, we're relying on the package github.com/DATA-DOG/go-sqlmock, which allows us to easily mock a database client.
Obviously, the purpose of this code is to give a general idea of how to implement what you need. It could be written in a better way, but it's a good starting point for writing tests in this scenario.
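One small addition: sqlmock can also verify, at the end of the test, that every expectation you configured was actually met:

if err := mock.ExpectationsWereMet(); err != nil {
    t.Errorf("unfulfilled expectations: %v", err)
}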
Let me know if this helps, thanks!
A common pattern in monolithic application design is to delegate business logic to a dedicated service, passing in an open transaction as, for example, a javax.persistence.EntityTransaction instance in Java, or an *sql.Tx in Go.
Go example:
// business.go
type BusinessLogicService interface {
    DoSomething(tx *sql.Tx)
}

type businessLogicService struct {
}

func (s *businessLogicService) DoSomething(tx *sql.Tx) {
    tx.ExecContext(.....)
}

func NewBusinessLogicService() BusinessLogicService {
    return &businessLogicService{}
}
// server.go
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
    log.Fatal(err)
}

bls := business.NewBusinessLogicService()
bls.DoSomething(tx)
tx.Commit()
Could the same effect be achieved in an architecture where each of these components is implemented in a different language/runtime? In such an application, Postgres is responsible for doing the 'bookkeeping' in relation to the DB transaction. It seems to me that it should be possible to pass a similar 'handle' for the transaction to another process so it can read its state and append operations.
For example the equivalent business logic is provided as a gRPC service with the following definition:
message TransactionInfo {
    string transaction_id = 1;
}

message DoSomethingRequest {
    TransactionInfo transaction_info = 1;
}

message DoSomethingResponse {
}

service BusinessLogicService {
    rpc DoSomething(DoSomethingRequest) returns (DoSomethingResponse);
}
The server process BEGINs the transaction and passes a reference to it to this BusinessLogicService.
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
if err != nil {
    log.Fatal(err)
}

conn, err := grpc.Dial(*serverAddr, opts...)
if err != nil {
    ...
}
defer conn.Close()
bls := pb.NewBusinessLogicServiceClient(conn)

/// SOMEHOW PASS THE TX OBJECT TO THE REMOTE SERVICE
txObj := &pb.TransactionInfo{....???????????.....}
result, err := bls.DoSomething(ctx, &pb.DoSomethingRequest{TransactionInfo: txObj})
tx.Commit()
Is this possible with Postgres or another DBMS?
I want to create a database-driven application using Golang, and I am trying to do it the TDD way.
When I try to test methods that make SQL queries, what packages are available?
I don't want to connect to the default database that I use for development. I can write code to pick a different test database while running a test, but is there a Go library that already does this?
Is there a library that does DB tests without connecting to a database at all?
What is the standard way to do database tests with Golang?
I had a similar question not long ago when refactoring some of my own tests, and there are a couple of ways you can do it:
a) Provide an exported type and an Open or Connect function that returns it - e.g.
type DB struct {
    db *sqlx.DB
}

// Using http://jmoiron.github.io/sqlx/ for this example, but
// it has the same interface as database/sql
func Open(opts *Options) (*DB, error) {
    db, err := sqlx.Connect(opts.Driver, fmt.Sprintf("host=%s user=%s dbname=%s sslmode=%s", opts.Host, opts.User, opts.Name, opts.SSL))
    if err != nil {
        return nil, err
    }
    return &DB{db}, nil
}
... and then, for each of your tests, write setup & teardown functions that return an instance of *DB on which you define your database functions (as methods, i.e. func (db *DB) GetUser(user *User) (bool, error)):
// Setup the test environment.
func setup() (*DB, error) {
    err := withTestDB()
    if err != nil {
        return nil, err
    }
    // testOptions is a global in this case, but you could easily
    // create one per-test
    db, err := Open(testOptions)
    if err != nil {
        return nil, err
    }
    // Loads our test schema
    db.MustLoad()
    return db, nil
}
// Create our test database.
func withTestDB() error {
    db, err := open()
    if err != nil {
        return err
    }
    defer db.Close()

    _, err = db.Exec(fmt.Sprintf("CREATE DATABASE %s;", testOptions.Name))
    if err != nil {
        return err
    }
    return nil
}
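And a matching teardown function, as a sketch (reusing open() and testOptions from above; it assumes the same package, so it can reach the unexported db field):

// Drop our test database once the test is finished.
func teardown(db *DB) error {
    db.db.Close() // close the per-test connection first
    admin, err := open()
    if err != nil {
        return err
    }
    defer admin.Close()
    _, err = admin.Exec(fmt.Sprintf("DROP DATABASE IF EXISTS %s;", testOptions.Name))
    return err
}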
Note that this is somewhat "integration" testing, but I strongly prefer to test against a "real" database since mocking the interface won't help you catch issues with your queries/query syntax.
b) The alternative, although less extensible on the application side, is to have a global db variable that you initialise in init() within your tests (since tests have no guaranteed order, you'll need to use init()), and then run your tests from there. i.e.
var db *sqlx.DB

func init() {
    var err error
    // Note the = and *not* := - we don't want to shadow our global
    db, err = sqlx.Connect(...)
    if err != nil {
        ...
    }
    err = loadTestSchema(db) // hypothetical helper that loads the test schema
    // etc.
}
func TestGetUser(t *testing.T) {
    user := User{}
    exists, err := db.GetUser(user)
    ...
}
You can find some practical examples in drone.io's GitHub repo, and I'd also recommend this article on structuring Go applications (especially the DB stuff).
I use a global variable to store the data source (or connection string) of the current database, and set it to a different value in the test functions. Since there is only one database I need to operate on, I chose the easiest way.
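For example, a minimal sketch of what that can look like (all names here are illustrative):

// store.go
var dataSource = "host=localhost dbname=app_dev sslmode=disable"

func Open() (*sql.DB, error) {
    // requires a registered driver, e.g. _ "github.com/lib/pq"
    return sql.Open("postgres", dataSource)
}

// store_test.go
func TestGetUser(t *testing.T) {
    dataSource = "host=localhost dbname=app_test sslmode=disable" // point at the test DB
    db, err := Open()
    // ...
}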
I have a table in the database containing user account information. I have a struct called User defined.
type User struct {
    Id        uint
    Username  string
    Password  string
    FirstName string
    LastName  string
    Address1  string
    Address2  string
    .... a bunch more fields ...
}
For fetching individual user accounts, I have a method defined
func (user *User) GetById(db *sql.DB, id uint) error {
    query := `SELECT
        ...a whole bunch of SQL ...
        WHERE id = $1
        ... more SQL ...
        LIMIT 1`
    row := db.QueryRow(query, id)
    err := row.Scan(
        &user.Id,
        &user.Username,
        &user.Password,
        &user.FirstName,
        &user.LastName,
        ... some 20 more lines of fields read into the struct ...
    )
    if err != nil {
        return err
    }
    return nil
}
And there are several places in the system where I need to fetch user information as part of a larger query. That is, I am fetching some other type of object, but also a user account related to it.
That means I have to repeat the whole rows.Scan(&user.Username, &user...) thing over and over again; it takes a whole page, it is error-prone, and if I ever change the user table structure I will have to change the code in a whole bunch of places. How can I make this more DRY?
Edit: I am not sure why this was marked as a duplicate, but since this edit is required, I will try to explain one more time. I am not asking how to scan a row into a struct. I already know how to do that, as the code above clearly shows. I am asking how to structure the struct scanning code in such a way that I do not have to repeat the same page of scanning code every time I am scanning the same type of struct.
Edit: also, yes, I am aware of sqlstruct and sqlx and similar libraries. I am deliberately avoiding these, because they depend on the reflect package, with its well-documented performance issues. And I intend to potentially scan millions of rows using these techniques (not millions of users, but this question extends to other record types).
Edit: so, yes, I know I should write a function. I am not sure what this function should take as arguments and what results it should return. Let's say that the other query I want to accommodate looks like this:
SELECT
    s.id,
    s.name,
    ... more site fields ...
    u.id,
    u.username,
    ... more user fields ...
FROM site AS s
JOIN user AS u ON (u.id = s.user_id)
JOIN some_other_table AS st1 ON (s.id = st1.site_id)
... more SQL ...
And I have a site struct method that embeds a user struct. I don't want to repeat the user scanning code here. I want to call a function that will scan the user portion of the row into a user struct the same way it does in the user method above.
To eliminate the repetition of the required steps to scan the *sql.Rows structure, you could introduce two interfaces. One that describes the already-implemented behaviour of *sql.Rows and *sql.Row:
// This interface is already implemented by *sql.Rows and *sql.Row.
type Row interface {
    Scan(...interface{}) error
}
And another one that abstracts away the actual scanning step of the row(s).
// have your entity types implement this one
type RowScanner interface {
    ScanRow(Row) error
}
An example implementation of the RowScanner interface could look like this:
type User struct {
    Id       uint
    Username string
    // ...
}

// Implements RowScanner
func (u *User) ScanRow(r Row) error {
    return r.Scan(
        &u.Id,
        &u.Username,
        // ...
    )
}

type UserList struct {
    Items []*User
}

// Implements RowScanner
func (list *UserList) ScanRow(r Row) error {
    u := new(User)
    if err := u.ScanRow(r); err != nil {
        return err
    }
    list.Items = append(list.Items, u)
    return nil
}
With these interfaces you can now DRY up your row-scanning code for all of your types that implement the RowScanner interface by using these two functions:
func queryRows(query string, rs RowScanner, params ...interface{}) error {
    rows, err := db.Query(query, params...)
    if err != nil {
        return err
    }
    defer rows.Close()

    for rows.Next() {
        if err := rs.ScanRow(rows); err != nil {
            return err
        }
    }
    return rows.Err()
}

func queryRow(query string, rs RowScanner, params ...interface{}) error {
    return rs.ScanRow(db.QueryRow(query, params...))
}
// example
ulist := new(UserList)
if err := queryRows(queryString, ulist, arg1, arg2); err != nil {
    panic(err)
}

// or
u := new(User)
if err := queryRow(queryString, u, arg1, arg2); err != nil {
    panic(err)
}
If you have composite types that you want to scan but you want to avoid having to repeat the enumeration of its elements' fields, then you could introduce a method that returns a type's fields and reuse that method where you need it. For example:
func (u *User) ScannableFields() []interface{} {
    return []interface{}{
        &u.Id,
        &u.Username,
        // ...
    }
}

func (u *User) ScanRow(r Row) error {
    return r.Scan(u.ScannableFields()...)
}

// your other entity type
type Site struct {
    Id   uint
    Name string
    // ...
}

func (s *Site) ScannableFields() []interface{} {
    return []interface{}{
        &s.Id,
        &s.Name,
        // ...
    }
}

// Implements RowScanner
func (s *Site) ScanRow(r Row) error {
    return r.Scan(s.ScannableFields()...)
}
// your composite
type UserWithSite struct {
    User *User
    Site *Site
}

// Implements RowScanner
func (u *UserWithSite) ScanRow(r Row) error {
    u.User = new(User)
    u.Site = new(Site)
    fields := append(u.User.ScannableFields(), u.Site.ScannableFields()...)
    return r.Scan(fields...)
}

// retrieve from db
u := new(UserWithSite)
if err := queryRow(queryString, u, arg1, arg2); err != nil {
    panic(err)
}
I am working on a multi-tenant application. I need to query a particular user of a kind, from a particular namespace.
I am able to get the values from the default namespace. The package I am using here is "google.golang.org/appengine/datastore".
q := datastore.NewQuery(ENTITYNAME).Filter("Name =", ed.Expense.Name)
var expenses []ExpenseEntiry
return q.GetAll(ed.Ctx, &expenses)
The namespace value is not part of the query (it's not a property of the query). The namespace comes from the context which you pass when executing the query, e.g. to Query.GetAll().
If you have a context (you do as you pass it to q.GetAll()), you can create a derivative context with a given namespace using the appengine.Namespace() function.
For example:
ctx2, err := appengine.Namespace(ed.Ctx, "mynamespace")
// check err
And use this new context to pass to Query.GetAll():
return q.GetAll(ctx2, &expenses)
It is rare that you need to create a new context with a different namespace; ed.Ctx should already be a context with the right namespace. So when / where you create ed.Ctx, you should already apply the namespace there, so you can avoid "accidental" exposure of data of other tenants (which would be a major security issue).
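For example, a sketch of applying the namespace once, where the request enters the app (tenantFromRequest is a hypothetical helper that maps the request to a tenant ID):

func handler(w http.ResponseWriter, r *http.Request) {
    ctx := appengine.NewContext(r)
    // derive the tenant-scoped context once; every datastore call below uses it
    ctx, err := appengine.Namespace(ctx, tenantFromRequest(r))
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // ... run all queries with ctx ...
}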
If you are using the old lib: google.golang.org/appengine/datastore, then you need to create the context with the namespace:
ctx2, err := appengine.Namespace(ed.Ctx, "mynamespace")
if err != nil {
    return err
}
But you WANT to be using the latest lib: cloud.google.com/go/datastore. The Namespace can be set directly on the Query object. This is new. You must then run the query using datastoreClient.Run(ctx, query).
func deleteTestNamespace(ctx context.Context, namespace string) error {
    // NewClient takes the project ID ("my-project" is a placeholder here)
    dsClient, err := datastore.NewClient(ctx, "my-project")
    if err != nil {
        return err
    }
    defer dsClient.Close()

    for _, kind := range envKinds {
        // Get all keys in the namespace for this kind
        query := datastore.NewQuery(kind).KeysOnly().Namespace(namespace)
        it := dsClient.Run(ctx, query)

        var keys []*datastore.Key
        for {
            // keys-only query: there is no entity to load, so pass nil
            key, err := it.Next(nil)
            if err == iterator.Done {
                break
            }
            if err != nil {
                return err
            }
            keys = append(keys, key)
        }

        // Delete all records in chunks of 500 or less
        for i := 0; i < len(keys); i += 500 {
            chunk := min(len(keys)-i, 500)
            if err := dsClient.DeleteMulti(ctx, keys[i:i+chunk]); err != nil {
                return err
            }
        }
    }
    return nil
}
func min(num1 int, num2 int) int {
    if num1 < num2 {
        return num1
    }
    return num2
}